So @ggamel, you are the founder of EyeGuide, tell us about that.
Thanks for inviting me to this interview, Ryan. Happy to be here.
Uh, hey, everyone! I’m Greg. I’m the CTO and Co-Founder of EyeGuide, a company that creates novel eye-tracking hardware and software. Right now, we make a system that rapidly records and scores eye movements to help determine whether something might be going on in a person’s body that could impact their ability to focus on moving objects.
My path to where I am now began years ago when my first job after college was as a webmaster for an internet service provider (ISP). I was thrown into the fire and suddenly had to update, design, and develop thousands of customer web pages. It was a weird ISP product offering, but I did it regardless. It exposed me to some awful legacy code, which was fun.
That job let me continue doing web design and development, but I started wanting to return to school. So I did. I went back to university, did my postgraduate work, and had the opportunity to sync back up with a mentor from my undergrad days.
In our usability lab, we wanted an eye-tracking system. But we couldn’t afford one because commercial systems were all USD 40-50k at the time.
So what did we do? We built our own.
Initially, we spun off from Texas Tech and sold affordable eye-tracking “labs” in a box. Basically, affordable eye-tracking software and hardware for researchers around the world to use for anything they could imagine.
It was tough. Really tough. But fun. I learned all about PCBAs, rapid prototyping of hardware with 3D printing, and injection molding, and I worked internationally with EE firms and suppliers in Taiwan, China, and India.
And then the software: we made Windows, Linux, Mac, and iPadOS apps.
Our software helped researchers create eye-tracking projects, record video of the eye movements and from the user’s perspective, and then analyze and clean all that collected data, then apply visualizations, and export everything for further research.
It was cool being the first company to offer an iPad app on the App Store that connected to commercially-available eye-tracking hardware. And it was a great learning experience maintaining and building native software for multiple platforms.
My work spanned all sorts of things: user interviews, usability studies, fieldwork, software prototyping, UI work, debugging, hardware prototyping, and more.
We realized, though, that applying the underlying eye-tracking tech to specific areas would be a more significant opportunity.
So we did a classic tech move and pivoted. And that’s what brought EyeGuide to where it is today: making a hardware and software system that rapidly records and scores eye movements.
So I went from webmaster -> co-founder -> a whole bunch of software, hardware, and other work -> UI developer and prototyper -> operations and software work -> operations and managing tech -> CTO currently.
😅 a wild and unexpected path for me.
Who are the primary customers of EyeGuide? Hospitals? Optometrists? Sports agencies? Researchers?
Professionals and organizations across several fields. They like having a tool that runs a 10-second eye-tracking test and quickly scores and presents data that an expert can interpret and use in their separate assessments.
The current system helps users run a 10-second test, captures thousands of eye-movement data points, then scores and presents a visual and numerical value to the test administrator. That data can then be interpreted later by a professional, such as a certified athletic trainer, doctor, nurse, or researcher.
Do a lot of marketing organizations use this?
Many marketers and firms used the previous product iteration.
The previous product was: 1) a wireless eye-tracking headset with a scene camera, eye camera, & audio, and 2) a recording pack with a high-capacity battery & storage.
This form factor allowed marketers and other customers to put the equipment on a test participant who could move or be stationary. Flexibility!
Then marketers could see and understand everything the person saw once they reviewed the data on their computers later.
Sticking with the software, what type of development is used for the device? Is it embedded C++ or something else that connects to the cloud?
Yep, we’ve used C++ for the client and server since the beginning. A light Linux variant runs the embedded system at the hardware level. The web app uses TypeScript + React. The client uses C++ and, sparingly, some Obj-C & Obj-C++.
It’s been fascinating to witness the tech shifts over the years. Initially, we used C++ for its cross-platform prowess and efficiency for native desktop software applications. It’s great for building complex desktop software for recording and analyzing research data. For iPadOS, it’s been an interesting process to make native apps. As a developer, not being all-in on Swift, SwiftUI, and Obj-C (or using something like Expo/React Native) to create apps means doing more work—fewer niceties.
How much and what kind of analysis do you do on the data collected?
Without getting too deep into the weeds: the data is eye fixations over a fixed duration while the eye attempts to closely follow a moving object. Then, after recording and storing it, the software displays the pupillary movements against the test pattern. Professionals can find this data highly useful as an additional tool in their expert assessments and interpretations.
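I can’t share EyeGuide’s actual scoring internals, but here’s a minimal C++ sketch of the general idea: reduce thousands of (target, gaze) sample pairs to a single number a professional can glance at. The `Sample` struct and the RMS tracking-error metric are purely illustrative assumptions, not our real method.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sample: where the moving target was and where the gaze
// landed at one instant during the 10-second test.
struct Sample {
    double target_x, target_y;
    double gaze_x, gaze_y;
};

// Root-mean-square distance between gaze and target across the whole
// recording. Lower values mean the eye followed the target more closely.
double rms_tracking_error(const std::vector<Sample>& samples) {
    if (samples.empty()) return 0.0;
    double sum_sq = 0.0;
    for (const Sample& s : samples) {
        const double dx = s.gaze_x - s.target_x;
        const double dy = s.gaze_y - s.target_y;
        sum_sq += dx * dx + dy * dy;
    }
    return std::sqrt(sum_sq / static_cast<double>(samples.size()));
}
```

A real scoring pipeline would also need calibration, blink removal, and noise filtering before any summary number is meaningful, but the shape is the same: many raw samples in, one interpretable value out.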
What is your favorite part of the whole product development process?
My favorite is probably:
👉 When people first use a product you’ve built, it works as intended, and then they start thinking about how it can fit into their day-to-day work.
That’s a special moment.
My second favorite is probably:
🛠️ After you start something new, get the first working version out there, people try it and like it, and then offer you *actionable feedback*. Actionable and constructive feedback is gold. Absolute gold. Gimme.
How do you know which tasks to keep in-house and which to outsource?
I haven’t yet found a silver-bullet approach to this, but here are some rules of thumb: don’t outsource core competencies; do ideation and prototyping in-house whenever possible; buy your own 3D printers for prototypes (if budget allows) instead of paying for just-in-time services; and find trustworthy hardware and software partners for manufacturing and specialty jobs.
Missing from the above: edge cases, quite a bit of nuance, people matters, finance matters, business capabilities, etc.
What extra challenges are presented when you have both custom software and hardware?
Making software is hard.
Making hardware is hard.
Making useful products is harder.
Being responsible for both custom software and custom hardware adds a whole extra tier of difficulty for a company. It’s remarkably hard.
But, as a result, I’ve been able to learn a bunch of different skills while guiding everything through ideation, R&D, production and manufacturing, launches, and everything in between. Those skills are probably helpful for my future endeavors. Maybe 😅
How can people find you elsewhere online?
It’s been a fun interview day, Ryan! Thanks for having me. Standing up in front of the class on career day was a blast.
If Threads had DMs right now, I’d say DM me here. Anyone who wants to connect privately can hit the “email” link in the footer on my site. FYI: new site dropping soon.
👋 Bye, for now, everyone. Thanks for tuning in while I shared a bit!
Read the full interview on Threads: Threads Dev Interview #34 with @ggamel, by @ryan.swanstrom.