Telesto.ai is an early-stage digital service: a platform that invites data scientists to dive into competitive machine learning challenges funded by corporate clients. Founder and CEO Tivadar came to us for help testing the first versions of the website wireframes. Following the initial discussions about the product, we pivoted the testing project to also validate the motivational and business assumptions behind it. After digging into the backgrounds and previous experiences of data scientists and the business processes of several industries, we identified promising product and business model design opportunities.
As input to the testing project, we had the wireframes of the website, which covered the key functionality for the two end-user personas: data scientists and businessmen. We quickly formulated the test tasks: machine learning challenge selection, model uploading, benchmarking, and personal profile management for the data scientists, and competition posting for the businessmen. In all of our projects, we like to prepare extensively in close collaboration with the customer so we can find the best ways to benefit the product or service. During this discussion with Tivadar, we decided to include another task, value proposition comprehension, which led to an explosive pivot for the testing project.
Value proposition comprehension is an associative task in which testers skim the landing page and brief us, the interviewers, on what they understand the purpose of the website to be. To prepare for this task, we went over the value proposition of the service with Tivadar (following the guided questions of the value proposition canvas) and found several hidden or untested assumptions about the pain points and motivations of the end-users. For example, a core mechanic of the service, the competition prize structure, was built on the plausible but untested assumption that data scientists engage in such competitions for monetary rewards. Tivadar reflected on this:
We also found that the “businessmen side” of the value proposition was largely underexplored. The service was missing key insights about the evaluation criteria corporate decision-makers use to assess the success of machine learning projects. With other business development information also missing, we decided to create a list of exploratory questions and include them in the test agenda. Why did we do this? Testing should be about making the digital journey seamless, right? Why did we care about human factors that play out BEFORE the website experience?
To be honest, testing a digital experience is a pretty boring, straightforward process: researchers tell testers WHAT to do and watch HOW they do it. If the testers fail, the researchers create an action item to patch up the flow. Testing in its narrow definition doesn’t cover WHY users want to do WHAT they do. Testers take on an imposed motivation: they pretend to be interested in interacting with your service because the researcher tells them to be. We decided to explore the interests, goals and decision processes of the testers to see what we could do to have each user arrive at the service out of their own interest. Tivadar helped us see the service in a new light by saying:
Testing and interviewing
To test the wireframes and the hidden assumptions behind the functionality, we invited 5 data scientists and 5 managers from different industries to 60-minute sessions. We considered it important to involve both personas in the testing, as Tivadar put it:
Word to the wise: if you do the sampling correctly, some testers will be so engaged with the topic that they will stay for an extra half hour. During this project we learned to budget for this time. To separate the two testing goals (wireframes and hidden assumptions), we split the sessions in half, using a semi-structured script for testing the assumptions and well-defined, structured tasks for testing the wireframes.
Tivadar is a CEO with a highly technical background (a mathematics PhD turned data scientist), but he is remarkably open to the human side of his service. It’s always a good sign when an engaged client takes part in the design process, and Tivadar took this co-creation to a whole new level: he joined every interview, asking questions and validating the service alongside us. After this project, we will recommend the same to every early-stage startup client. Over and above the added value of unfiltered empathy with end-users, Tivadar saved 10-15 billable man-hours: we normally conduct every interview with two researchers, and he was able to replace one of them.
Analysis and delivery
During research, we always face a trade-off between man-hours invested and information retention. Analysis and delivery is a process of distilling information: the full interview material with perfect transcripts would run to 90 pages of text. You distill this material into 40 pages of insights, then use affinity diagramming to narrow it down to 10-20 pages of actionable insights that answer your research questions. This full-blown process needs to be trimmed and automated to save time without too great an information loss. In this particular sprint, we saved billable time for our client in two ways. The first is simple: instead of producing full transcripts, we extracted the 200 insights directly from the recordings. The second is a bit more complicated; bear with me.
Affinity diagramming is, in its most valuable form, an analogue task: workshop participants write the insights on post-its and cluster them to create actionable insight categories. Physical space visualizes similarity and lets participants use spatial memory to overcome information overload, an effect that virtual software struggles to replicate. Writing post-its by hand, however, has two problems: putting down 100-300 post-its is tedious and takes too much time, and the summarization involved causes additional information loss. Our solution? Printing the post-its with our internally developed software.
We overcame these problems by using our custom-developed excel-to-ppt software to print each of the 200 insights on a separate post-it. We spend R&D time at Pine developing software that sweetens trade-offs like the one outlined above.
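Our internal excel-to-ppt tool isn’t public, but the core idea, turning a spreadsheet of insights into printable post-it layouts, can be sketched with nothing more than the Python standard library. The snippet below is a hypothetical stand-in, not our actual tool: it reads one insight per row of a CSV export and lays them out as HTML pages in a grid sized for common printable sticky-note sheets. The page size, grid dimensions and all function names are assumptions for illustration.

```python
import csv
import html
import io

POSTITS_PER_PAGE = 6  # assumed 2 x 3 grid per printable sticky-note sheet


def chunk(items, size):
    """Split a list into consecutive groups of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


def insights_to_pages(csv_text):
    """Read one insight per CSV row and group them into printable pages."""
    rows = [r[0].strip() for r in csv.reader(io.StringIO(csv_text))
            if r and r[0].strip()]
    return list(chunk(rows, POSTITS_PER_PAGE))


def render_page(insights):
    """Render one page as an HTML grid; each cell becomes one post-it."""
    cells = "\n".join(
        f'  <div class="postit">{html.escape(text)}</div>' for text in insights
    )
    return f'<div class="page">\n{cells}\n</div>'


def render_document(csv_text):
    """Produce a single printable HTML document from the CSV of insights."""
    body = "\n".join(render_page(p) for p in insights_to_pages(csv_text))
    # 76mm is the usual square sticky-note size; page-break-after keeps
    # each grid on its own printed sheet.
    style = ("<style>.page{display:grid;grid-template-columns:repeat(2,76mm);"
             "grid-auto-rows:76mm;page-break-after:always}"
             ".postit{padding:8mm;font:14pt sans-serif}</style>")
    return f"<!doctype html><html><head>{style}</head><body>\n{body}\n</body></html>"
```

Printing the resulting HTML from a browser onto sticky-note sheets would cover the same ground as a spreadsheet-to-slides converter; the real value is simply that 200 insights become physical, clusterable objects without anyone transcribing them by hand.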
Outcomes of the project
We managed to patch up the user experience of the wireframes based on the testing, but the exploration provided invaluable insights that allowed Telesto to pivot in the right direction, saving future money and effort. For example, we found that data scientists are largely intrinsically driven to solve machine learning competitions: most users are motivated by exploring new technologies and trending domains and by educating themselves, rather than by the financial compensation of winning prizes. This led us to review the business model and reduce costs. We also learned a great deal about the corporate sales process and concluded that it is a designable service process, one with a high demand for personal touch points. Tivadar and Telesto.ai were able to lay out the next steps of their business strategy.
This research project was a good example of what we call “human back-end” design. Technology-heavy teams often have the engineering capability to find out whether a solution, with all of its functionality, will run on the preferred technology stack. But will your “code” run on humans? A well-designed solution knows the needs, motivations and thoughts of its end-users and responds to them empathetically. It’s much like including motivations and other hidden human aspects in the technology stack.
Research is unfortunately often neglected: it requires significant effort, and customers find it hard to challenge their own assumptions about their sector. This project taught us once again that there is no top-notch design without research and validation.