
Unlocking the Power of Data for Transformational Insights

Industry experts Adam Wilson (Alteryx Analytics Cloud) and Anthony Deighton (Tamr) discuss what’s new with data consumption and decision making and how people can really take advantage of better data.


Why data products? Why now? Why does democratization matter?

  • Define the essence of a stellar data product
  • Craft a winning data product strategy, culturally and technologically
  • Learn from real-world examples of how to drive transformative change


Want to read the transcript? Dive right in.

Anthony Deighton: 

Alright, why don't we get started? Thank you, everyone, for joining us today for this conversation about data products. I'm joined today by Adam Wilson from Alteryx. I'm Anthony Deighton, and I run the data products business here at Tamr. Adam, a long-time friend, maybe take a second to introduce yourself.

Adam Wilson - 00:00:30:

Sure, absolutely. Thanks, Anthony, for having me. It's a pleasure to be here, and this is a topic that's near and dear to my heart. So I always love riffing with you a little bit on what's going on in and around data, and specifically data products.

So as Anthony mentioned, my name is Adam Wilson. I'm the GM for the Alteryx Analytics Cloud and formerly the CEO of Trifacta. I've spent my entire career in data integration, data transformation, and data cleansing, and it's a fun time to be doing data things right now.

Anthony Deighton - 00:01:06: 

Yes, I couldn't agree more, and I think this topic of data products is a particularly timely one. It's clearly a new idea and a burgeoning space, and I think it really brings together the best of what Alteryx offers in terms of optimizing data consumption, outcomes, and decision making, which I think you're going to spend quite a bit of time talking about, with thinking about how we get clean, curated, continuously updated sources of data.

Over the course of this conversation and presentation, if you have questions, and I hope you do, please use the chat panel located in the Zoom toolbar to ask them. One of the benefits of that chat panel, of course, is that you will see everybody else's questions as well. Then, at the end of the presentation portion, we'll pick up those questions and do our best to answer as many of them as possible. Of course, if anything else comes up, feel free to use the chat panel; we'll be able to see it there.

So with that out of the way, Adam, it would be great if you could start us off and talk about what's going on in terms of data consumption, decision making, and outcomes, and how people can really take advantage of better data.

Adam Wilson - 00:02:24:

Great. Yeah, thanks, Anthony. So I thought I'd just set the stage a little bit for this discussion. I won't come in and rattle off lots of statistics about why data transformation, digital transformation, and data modernization are important. If you're on this webinar, you probably understand that intuitively or have been bombarded with it over the last several years, especially at this moment when everyone is in a do-more-with-less mode and trying to figure out how to use technology to get there.

But for me, the interesting thing is that while there's a lot of heat and excitement around this topic, people generally feel like things are going too slow. Even the analytics teams feel like they're tied up with a lot of low-level data janitorial work and are ultimately frustrated by that. So we have a situation where the desire to mature analytically and to transform digitally is probably greater than it's ever been. For those of us who've been in the space a long time, that's great for the impact we can have in our careers, but it's also proving challenging in reality, in terms of how people actually achieve it.

And so I think what we've seen is that, historically, most companies have a small number of highly analytical individuals, maybe data scientists or data engineers or advanced developers, doing a lot of this work. And then there's everyone else: accountants, tax professionals, marketing analysts, engineers who have questions every day. What's my attrition rate? How are sales moving this week? Is our inventory optimized? And clearly there aren't enough data scientists to handle all of these requests.

So what we find is that if that's the case, if there aren't enough of these people (and by the way, the data scientists we have are also preoccupied with other projects), a lot of people will say, "Well, gee, maybe we can fix this by simply providing the knowledge workers with the training to become data scientists. We'll teach them all Python, or we'll have them use the same tools and technologies that the data scientists use, and that's how we're going to get there."

And our customers have really told us that this is problematic on two fronts. One is that if you're in the line of business, you're an accountant for example, you may not want to be a data scientist. That's not necessarily your chosen area of expertise. You just want to be able to get to the data you need to make the decisions that support your specific role within the company.

And a lot of times the other problem is that the tools data scientists use really aren't suited for knowledge workers, which results in low saturation, little to no adoption, and quite a bit of frustration.

And so I think what we then see is that, in the end, companies have a choice to make about how much effort, how many resources, and how many dollars they're going to invest in democratizing analytics versus in the tools and projects that the central data science team will take on. There's a trade-off there, but there's also a way to think about balancing some of this. Data scientists will likely take on the most complex and important challenges and will hopefully save or create very large piles of money on those big-bet projects. But your company won't have thousands of data scientists, and by the nature of big bets, there will only be a limited number of those projects.

Meanwhile, the analysts across the company can work on smaller projects that are likely saving or creating smaller amounts of money, but in many cases there are thousands of them. And the value that is created could eventually be even larger than the big piles created on the left here. As you continue to upskill folks, the piles on the right will continue to grow. So this plays out in big and small ways each and every day across a really large set of people within the organization. And the reality is that by upskilling the domain experts, better questions and better projects will come in and enable the data science team to save more money as well. The functional domain experts, the ones who know where the gold is buried and really understand how the business works and what the key questions are, will make it easier for everyone to collaborate and to implement solutions. And so, as the teachers, the data scientists become trusted advisors and will have a seat at the table as the domain experts work on their efforts, even as analytics democratizes across the team.

And so what we see is that best-in-class companies, the ones accelerating progress in analytics maturity, are investing purposefully in both sides. If you invest 90% of your resources on the right-hand side, or 90% on the left, you simply won't get the benefits that a more balanced company will. Whether the split is 50/50 or 60/40 depends on the company and where you are in your analytics journey, but balance is certainly key: over-investing in either the left or the right side of this model will generally result in a lower level of benefit than expected.

So while there's no set answer, and the distribution really depends on the company's situation, the investments that have already been made, and where they are, we really think this is a moment in time to think in a very deliberate way about where some of these investments are going and where you will ultimately see ROI. In our experience, this has played out to great benefit for organizations that have embraced the idea. We're not just saying, "Let's let the people who know the data best do the work." We're also complementing that by asking how we make sure that those with deep expertise in data are able to collaborate with those individuals and support them in a more meaningful way.

00:09:41

And so the example I'm highlighting here is really about the success of this balanced approach. In 18 months, a consulting company with over 100,000 users achieved massive, transformative results in both top-line and bottom-line impact, by getting more people involved so they can create their own data products and do more of this work on their own, so it doesn't become the exclusive purview of the highly technical. And these are the types of impacts we're now starting to see play out across most organizations that are looking at this very strategically.

So a lot of times the next question is, "Okay, fine, but how do we get started?" And while we want to see a large percentage of knowledge workers and processes impacted, the path to get there has to take into account the actual journey of each of the individuals involved. Our experience in helping democratize analytics has really shown that the journey for a knowledge worker is very consistent and follows these steps.

00:11:02

First, people start by learning how to wrangle data in ways that are more advanced and more efficient than just working inside spreadsheets, which is probably what they were doing before. Often the data starts to get larger and more complicated. Then they start thinking about automation: how do I automate analytics and let the machines do more of the work? And with new approaches around AI, ML, and generative models, the idea of automation comes into focus more and more for most organizations. It's important to note that when they automate an analytic process, they typically aren't changing the process; they're simply automating what's there today. That's great, but it's not necessarily a transformation yet. It's more about wringing a lot of manual effort and cost out of things, eliminating manual tasks, and allowing people to move on to higher-order work. Then, as you learn some analytics (likely not full modeling, but simple if-then rules and other conditional logic), you start to apply some of those capabilities, and eventually you get to a point where you can start to think about re-engineering the process as much as automating and analyzing it. And this, for us, is really where you start to see increasing returns to scale as you go down this journey.
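To make the "simple if-then rules" step concrete, here is a minimal, hypothetical Python sketch of the kind of conditional logic a newly upskilled analyst might apply once their data is wrangled. The row fields, threshold, and function names are invented for illustration; they are not from the webinar or any specific product.

# Hypothetical example: a simple if-then rule applied to wrangled inventory data,
# the kind of conditional logic that comes before any real modeling.
from dataclasses import dataclass

@dataclass
class InventoryRow:
    sku: str
    on_hand: int
    weekly_sales: int

def needs_reorder(row: InventoryRow, weeks_of_cover: int = 4) -> bool:
    """Flag a SKU when stock on hand covers fewer than `weeks_of_cover` weeks of sales."""
    if row.weekly_sales == 0:  # no recent sales, so nothing to reorder
        return False
    return row.on_hand < row.weekly_sales * weeks_of_cover

rows = [
    InventoryRow("A-100", on_hand=120, weekly_sales=40),
    InventoryRow("B-200", on_hand=500, weekly_sales=35),
]
for row in rows:
    print(row.sku, "reorder" if needs_reorder(row) else "ok")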

I think that many times the idea that you want to go down this journey causes everyone to step back and say, "Well, okay, I'm not the first one to do this. There must be best practices around it." And certainly there are. Having done this for decades, we've done quite a bit of research and work to curate and refine core best practices for effectively driving analytic upskilling within an organization. You'll likely have some of these in place yourselves today; that wouldn't be a surprise. But to be truly successful, you really need strong leadership and a clear plan that builds on many of these approaches and combines many of these tactics.

And really, if this is something that's of interest, most organizations we work with typically have an analytics or innovation program where they're very intentional about combining many of these tactics to deliver that upskilling across the organization. This is something we've obviously spent a lot of time working with customers on, and in some cases we've collaborated with Tamr on as well. So if that's of interest to you, I'm certainly happy to tell you more.

So hopefully that gives everyone a bit of a backdrop and some context: why data products, why now, and why democratization matters in this overall exercise. With that, I'd love to hand it back to Anthony to talk a little bit about the data-product-specific piece of this and where you go from here.

Anthony Deighton - 00:14:29:

Thanks for that, Adam. I think that's a very compelling story about how we see consumption endpoints shifting and changing, and frankly just maturing. As you said, I'm going to talk a little bit about data products more generally and about how we see that strategy connecting with what you started us with.

So the first thing to recognize is that the data space itself has changed a lot over the last 10 years. In particular, the challenge used to be big data: storing and managing large quantities of data. That in itself was a fundamental problem organizations struggled with, and it's largely a solved problem at this point. Between inexpensive cloud storage and cloud data warehouses, storing and managing large quantities of data is largely solved.

The idea that we're going to put all of that in one place, though, is never happening. That's a dream we should let go of, because data tends to be organized the way the organization that created it is organized. If you've organized your business by product line, you tend to organize your data by product line. If you organize your business by geography, you tend to have geographic silos of data.

As Adam indicated, the consumption endpoints have never been better. So it is a wonderful time to be a data scientist, an analyst, or a consumer of data. The tools that exist are just phenomenal; you really are in a position to do really sophisticated analysis really simply. In particular, and I saw a couple of questions asking about this, around AI and ML: the ability for anyone on this call, or anyone on their teams, to go in, take data, and build a predictive model around it is truly much easier today than it has ever been.

And very much to your point, Adam, it's never been more important. Enabling and empowering everyone in the organization with these data consumption endpoints is the mechanism to achieve great ROI.

00:17:10

So we have this interesting duality today. Storing and managing data has never been cheaper and easier, and it's never been easier to get value and ROI out of that data. So why would everyone on this call largely agree that when we do this analysis, the question that remains is about the data itself? People look at the results of these analyses and say, "Wait a second, how come there are three instances of Microsoft? I know there's only one Microsoft in the world, so why are there three in the data?" Or they look at it and say, "How come that data is missing an address? Why do we not have the address of that customer? That can't be right." Or they'll say, "This is really compelling, but did you include the data from Europe in this? You didn't include the data from Europe. Okay, well, now I have to go back to the drawing board and figure out how to do that."

And it's not for lack of data management tools. I think the great travesty in the space today is that there is just an enormous amount of software available to organizations that are interested in knitting together a set of tools to try to solve this problem. It's really a smorgasbord, a buffet of tools available to everyone.

And I liken this very much to the situation that existed in the 1990s for application software. In the 90s, if anyone on the call was around then, if you had the problem of creating a system for people to manage accounts, opportunities, and contacts, your solution was to procure an Oracle database, a 4GL programming language, and a whole other set of software, and then build that system yourself. Then along came Siebel and Salesforce and PeopleSoft and a whole set of application software companies who said, "This is silly. We know you need a system to manage, for example, accounts, contacts, and opportunities. We're just going to build that and deliver it to you as a CRM application."

And so part of the idea behind a data product strategy is to apply a similar logic to the data management space. The simple idea is: what if we could take an application approach to data? If we could build clean, curated, and continuously updated versions of the key entities that matter in our organization, then we could deliver these as products, not as a set of tools that somebody has to knit together. So we would deliver data around customers, suppliers, companies, products, parts, locations, and employees, and every entity that mattered in your enterprise would be delivered as a packaged product that gave you the cleanest, most curated, most up-to-date version of the key data that mattered to you as an organization. And that would allow you to introduce a new set of capabilities on top of that data product. Instead of thinking about knitting together a set of tools, packaging the data and delivering it in this form lets you open up new capabilities to your enterprise: allowing people, for example, to comment on data or to provide feedback on it, to say "this data is wrong" or "this data is really helpful," or to find who the experts in your organization are in a specific domain of data.

Or think about how you bring data together across silos, improve its quality, or integrate third-party sources into these data products. By focusing energy around these key entities and delivering them in a packaged way, you open up the possibility of new capabilities.

And it's important to think about this not as a set of tooling. This isn't yet another set of tools in the data management space, God forbid. Really think about it in terms of domain-driven, industry-specific, entity-specific domains: from the perspective of B2B customers or B2C customers, contacts, suppliers, patients, or providers if you're in the healthcare space. Think about it not from the perspective of the source, but from the perspective of the consumption endpoints, around a specific verticalized domain. And in doing that, you can think about delivering this as an app store for data. The vision here is that if you employ a data product approach, you should be in a position to deliver to your organization the key entities that matter. Those data products should cover all of the data associated with a key entity, they should be dynamic and updated continuously behind the scenes without human intervention, and they should be focused on the consumption endpoints: thinking about how we deliver this data to fantastic tools like Alteryx.

When you think about these key entities that matter in your organization, they enable a set of use cases from a user perspective. They are naturally consumption oriented: they give people a mechanism for jumping off into these analytic experiences, and they also give the organization a view into how that data is changing as new sources are added, new records get added, and new attributes show up. When you look at the details of any particular entity, it should give you a view of all the data on that entity that you have been able to bring together and deliver to decision makers, and give people visibility into how that data is changing over time. And you should be able to go into a specific instance (the example screenshot shows a customer, but it could be a supplier, an employee, or a contact) and get a 360-degree view of that entity. It should be beautiful and visually engaging, you should see all of the data the organization has around that key entity, and you should be able to provide feedback and updates to that data. So if you see an error, this should be the mechanism by which you alert somebody that there's a problem in the data.
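To give a concrete sense of the shape of such an entity-centric data product, here is a minimal, hypothetical Python sketch of a consolidated customer record with lineage back to its source systems and a channel for consumer feedback. The class, field names, and feedback mechanism are assumptions made for illustration, not Tamr's actual schema or API.

# Hypothetical sketch of an entity-centric "customer" data product record:
# one consolidated, continuously updatable view plus a channel for feedback.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CustomerEntity:
    entity_id: str                                                # stable ID for the mastered entity
    name: str
    address: Optional[str] = None
    source_record_ids: List[str] = field(default_factory=list)   # lineage back to source silos
    feedback: List[str] = field(default_factory=list)             # comments from data consumers

    def add_feedback(self, note: str) -> None:
        """Let a decision maker flag an error or add context on this entity."""
        self.feedback.append(note)

customer = CustomerEntity(
    entity_id="cust-0001",
    name="Microsoft",
    source_record_ids=["crm:123", "erp:A-77", "billing:9041"],
)
customer.add_feedback("Address is missing; the billing system should have it.")
print(customer.name, len(customer.source_record_ids), "source records,",
      len(customer.feedback), "comment(s)")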

00:24:21

So this is a very different approach to thinking about how we deliver data into these consumption endpoints than we've had historically. The traditional approach, often thought of as MDM, is a top-down master data approach, typically built on the back of hundreds of rules that try to impose order on sources of data. Our view is that this is a fundamentally flawed approach.

There will never be a scenario in which you can build enough rules and apply enough human curation to create clean, curated, and updated views of these key entities that matter. Maybe, if you invest huge amounts of time and energy, you might be able to do it for a single, simple entity, for example the companies you do business with, in one particular domain.

Our view is that the technology innovation that's broken this market open and allowed for a new, disruptive approach is AI. The ability to train a machine to look at data and do what humans are so capable of doing - cleaning that data and bringing it together - allows you to take a new approach, this data product approach, where we can focus on key domains rather than on sources. We can then do things quickly and significantly less expensively, and you can empower the domain experts in the organization to focus their energy where the machine learning is having trouble. Again, it's built on the back of an AI-driven, machine learning approach to solving this problem.
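As a toy illustration of the matching problem Anthony describes, here is a short Python sketch that flags likely duplicate company names using a simple string-similarity heuristic from the standard library. This only sketches the general idea of surfacing candidate matches; Tamr's actual approach trains machine learning models on labeled matches rather than relying on a fixed rule like this, and the record values and threshold below are invented.

# Toy sketch: surface likely-duplicate company records with a string-similarity
# heuristic so a human (or a trained model) can confirm or reject the matches.
from difflib import SequenceMatcher
from itertools import combinations

def normalize(name: str) -> str:
    """Lowercase and strip punctuation and common legal suffixes before comparing."""
    name = name.lower().replace(",", "").replace(".", "")
    for suffix in (" corporation", " corp", " inc", " ltd"):
        name = name.removesuffix(suffix)
    return name.strip()

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

records = ["Microsoft", "Microsoft Corporation", "Microsofft Inc.", "Salesforce"]

# Pairs above the threshold become candidate duplicates for review.
for a, b in combinations(records, 2):
    score = similarity(a, b)
    if score > 0.85:
        print(f"candidate match ({score:.2f}): {a!r} ~ {b!r}")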

And what this enables is new roles in your organization, like data product managers. Just as you might have product managers in a software company, you could have data product managers whose job is to look after these key entities, manage the experience that decision makers have with that data, and improve that data over time. Very much as you would expect a product manager to engage with customers, hear feedback, and improve the product, data product managers work with decision makers to understand where the gaps are: where they're missing sources or attributes, and where the data quality isn't good. And they get a set of tooling that lets them iterate on those improvements and work in an agile manner.

00:26:55

So that's very much what Tamr's focus is: how to build a framework for doing that and delivering these data products to organizations. And again, from Adam's and my perspective, our view is that if you bring together a world-class consumption endpoint like Alteryx, which really allows you to get more out of your decision making, and connect it back to the best data you have in your organization, what you can produce is better outcomes.

Adam, I'm sure you have a point of view on how these two approaches work together.

Adam Wilson - 00:27:36

Yeah, and I think part of the "aha" here is that people have woken up to the fact that if they've got data quality challenges with their data products, then their machine learning, their AI, and their analytics may be worthless. That's creating a bit of a burning platform to really think about these approaches to building data products and continuously curating them. And at the same time, marrying that to this idea that we'd love to unleash this on a much larger set of individuals who are increasingly being told, "Hey, be more data-driven," and they're like, "Great, if you can give me the data in a form I can use, I'm happy to be. But don't tell me to be more data-driven and then tell me I have to wait six months for a table to be added to a data warehouse so I can answer a question." And I think it's really about increasing the velocity with which people can take these canonical forms, build out data products based on them, and then ultimately take action on the insights they come up with.

Anthony Deighton - 00:28:57

So with that, I know there have been quite a few questions showing up in the chat panel, and please don't stop; feel free to put more in there. Behind the scenes, Katie has been feverishly reading these questions and trying to bring them together. At the risk of putting you on the spot, Katie, are there any questions you want to toss at Adam and me?

Question & Answer Session: Listen Live starting at 00:29:19