AI for Enterprise
By Jeremy Fajardo
Technology’s ubiquity is difficult to ignore these days, and analytics — one of its key enablers — is an industry-agnostic driver whose impact is probably best characterized by its explosive growth (and fair share of hype). Despite that ubiquity (or maybe because of it), trying to grasp the full scope of the latest trends, pitfalls, and insights feels like an impossible task. In May last year, our team ran a conference to try to get a pulse on the industry from as many perspectives as we could fit into a single day.
Below are three key topics that arose and are especially relevant to practitioners at the enterprise level:
Data Science Vendors
Quick-Wins and Technical Debt
Data science as a platform and service
The icon of Toronto’s garment district.
In Toronto’s garment district, the AI firm Dessa occupies a refurbished warehouse where cofounder Stephen Piron’s grandmother worked as a seamstress during the early days of the industrial revolution. It was a time when machines were being leveraged to augment the way humans worked, with a profound impact on society and the way we lived — echoing our current digital age and its applications of ML and AI.
The portfolios of Dessa and AI firms like it showcase the breadth of possibilities analytics holds across multiple industries. At the ABD 2018 conference, Piron highlighted the blueprinting process his team follows and the “Dragon’s Den” style pitches they employ to filter ideas for AI applications internally. An engineer at heart, he noted the shortcomings of open-source out-of-the-box solutions and the subsequent pivot his team made towards building a software platform for enterprise AI (aptly named Foundations).
The SaaS model is a common one for many AI firms, as it provides an onboarding mechanism to quick-start AI projects with enterprise clients, but Piron is well aware that advanced analytics cannot be sold as a turnkey solution.
Many enterprise companies do not have this ideal infrastructure and lack the resources to build it quickly. This reality is what makes the service-and-product approach attractive for small AI firms to pitch. By simplifying the integration, companies like Dessa or fellow Toronto firm Rubikloud can achieve a small amount of lock-in when onboarding an enterprise client.
And that lock-in — no matter how small — will make the race for market penetration interesting in the next 1–2 years. Rubikloud’s approach of focusing on a single industry (retail) is one way to get ahead. By ceding other opportunities (e.g., the financial sector) to its competition, they can focus their solutions and their brand in one sector.
There will also be an interesting push and pull as enterprises themselves try to balance which projects to dole out to the AI firms and which ones to keep in their back pockets for their own growing data science teams. For enterprises more mature in their analytics journey, the movements and solutions offered by AI startups can provide a benchmark for up-to-date applications.
“Command Centre” is not a term many companies get to use as a descriptor of a room in their workplace.
In Panel 1 of ABD 2018, Dan Zikovitz of GE recounted the building of Humber River Hospital’s “data wall” as a means of operationalizing the hospital’s investments in advanced analytics. It is an opaque but extremely comprehensive UI drawing on a plethora of analytical engines, from multi-agent simulation models to several algorithms for multi-layered capacity-planning problems.
Is the dashboard to end all dashboards the ultimate end-game of analytics? Eric Bogart of Acosta describes his firm’s attitude to dashboarding in a single word: allergic.
“Data visualization is a way to inspect the decisions we’re making at scale and communicate our value. But broad scale visualization is something that will consume more than it produces.” — Eric Bogart @ ABD 2018
Naturally, Bogart occupies a different space than HRH (retail, not healthcare), where analytics plays a much different role depending on the specific use case. That being said, it raises some interesting questions about when and how to utilize data visualization.
Data viz is a staple part of many data analysts’ day-to-day activity — whether by habit or mandate. If inspecting a chart you made would lead you to make the same decision you were going to make anyway, why make the chart?
This line of thinking is really a question of efficiency. It’s not that the chart has no value — it’s that the time spent making it was traded for time spent analyzing something else. Human capital is expensive, so what if that chart was automated?
Yes, it’s a small thing. But a single glance at a header of distributions can give a team of analysts a useful at-a-glance view of something they manipulate every day. Enterprise organizations are best poised to benefit from these tiny design efficiencies at scale. With good data science talent being so expensive, visualization as tooling is one small way to make sure the experts you’re hiring are doing what you’re paying them to do.
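As a rough illustration of what “visualization as tooling” might look like, here is a minimal sketch of an automated header of distributions: one summary line per column, with a tiny text histogram so the shape of the data is visible at a glance. The function names, the text-sparkline approach, and the example column are our own assumptions, not anything a vendor at the conference described.

```python
import statistics

BARS = " ▁▂▃▄▅▆▇█"  # blank through full block, used as histogram bar heights

def spark(values, bins=8):
    """Render a tiny text histogram so a column's shape is visible at a glance."""
    lo, hi = min(values), max(values)
    width = (hi - lo) or 1  # avoid division by zero for constant columns
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width * bins), bins - 1)] += 1
    peak = max(counts)
    return "".join(BARS[round(c / peak * (len(BARS) - 1))] for c in counts)

def distribution_header(table):
    """table: dict of column name -> list of numbers. One summary line per column."""
    lines = []
    for name, vals in table.items():
        lines.append(
            f"{name:<12} n={len(vals):<5} min={min(vals):<8g} "
            f"med={statistics.median(vals):<8g} max={max(vals):<8g} {spark(vals)}"
        )
    return lines

# Hypothetical column an analyst might glance at daily before digging in
for line in distribution_header({"basket_size": [1, 2, 2, 3, 3, 3, 4, 9]}):
    print(line)
```

Generated once and pinned to the top of a shared notebook or report, a header like this costs no analyst time after it is written.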
Consider what kind of interaction best suits the end user of a built and trained analytical process. An interactive dashboard? Or a single push notification to the key stakeholder? Ideal design of an analytics solution considers the process from the data generating mechanism all the way to the ultimate end user(s).
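The push-notification alternative can be sketched in a few lines: instead of rendering a dashboard for someone to poll, the analytical process stays silent until its output crosses a decision threshold, and only then pushes one message to the stakeholder. The threshold, message, and `notify` hook below are all illustrative assumptions, not a description of any system discussed at the conference.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    recipient: str
    message: str

def notify_if_actionable(score, threshold, recipient, notify=None):
    """Push a single message to the stakeholder only when the model output
    crosses the decision threshold; otherwise stay silent."""
    if score < threshold:
        return None  # nothing actionable: no dashboard to check, no message sent
    alert = Alert(
        recipient,
        f"Risk score {score:.2f} exceeded {threshold:.2f}; review the plan.",
    )
    if notify:
        notify(alert)  # in production this would hook into email/Slack/pager
    return alert

sent = []
notify_if_actionable(0.91, 0.8, "ops-lead@example.com", notify=sent.append)
notify_if_actionable(0.42, 0.8, "ops-lead@example.com", notify=sent.append)
# only the first call produces an alert; the second stays quiet
```

The design choice is the point: the end user's interaction surface is one message, and everything upstream of it can change without them noticing.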
End-to-end design is a luxury
Enterprise analytics as a process might look something like this:
Data Generating Process
Cleaning / Feature Engineering
Being able to design and optimize every stage of this workflow is a luxury that enterprise organizations don’t always have. Additionally, every stage can look different for every company, and every stage has its own pitfalls. Here are two that we’ve heard come up several times:
Attributing a failure to derive an impactful decision at the end solely to the analytics engine.
Have a clear measurement strategy for your approach and a strong preliminary understanding of your data.
Conduct a thorough data exploration — not only on the data itself, but also audit the process that generates it. Outline all the caveats about the process with respect to the inferences you’re hoping to make once the model is in production. This will help guide your measurement strategy and most importantly, help you identify the minimum amount of value you can get.
In other words, is even the “dumbest” version of your model, still providing some amount of incremental value?
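One way to make that question concrete is to always compare against the dumbest possible baseline. The sketch below (our own illustrative example, with made-up labels and predictions) measures a model's lift over a majority-class predictor; if the lift isn't clearly positive, the model isn't yet earning its keep.

```python
def accuracy(preds, actual):
    """Fraction of predictions that match the actual labels."""
    return sum(p == a for p, a in zip(preds, actual)) / len(actual)

def baseline_lift(model_preds, actual):
    """Compare a model against the 'dumbest' baseline: always predict the
    majority class. Positive lift means even this first model adds value."""
    majority = max(set(actual), key=actual.count)
    base = accuracy([majority] * len(actual), actual)
    return accuracy(model_preds, actual) - base

# Hypothetical binary labels (e.g. churn yes/no) and model output
actual = [1, 0, 1, 1, 0, 1, 1, 0]
model  = [1, 0, 1, 1, 1, 1, 0, 0]
print(f"lift over majority baseline: {baseline_lift(model, actual):+.3f}")
```

The same comparison works with any metric tied to your measurement strategy (revenue per decision, cost avoided); accuracy is just the simplest stand-in.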
Data cleanliness may be mitigated by data engineering efforts but data availability is a business and strategy problem.
Data enrichment as a strategic consideration when making business decisions is often not something large enterprises are actively doing (yet). In fact, many organizations struggle to find value in the data they do have.
What other processes are adjacent or related to the one from which you are sourcing your data? Can you access those?
For product-driven companies: how can you design features / encourage new forms of interaction to get more robust info from your user base?
Parting thoughts on technical debt in ML processes
There is one last theme we hear often in discussions of analytics for enterprise: the notion of the quick-win project to help kickstart analytics. We’ll be diving deeper into this topic in future articles, but for now we just want to highlight one concept: technical debt.
For those not familiar with the software engineering concept, its relevance to ML and AI is important. The idea is that agility in engineering often comes at the cost of technical debt: speed now begets more painful work later. There are strategies and frameworks in software engineering to mitigate technical debt, but many of them don’t account for the host of ML/AI-specific debt that advanced analytics processes create.
If you’re an analytics professional in a large enterprise trying to secure buy-in for future analytics investment, be wary of moving too quickly or taking your quick-win project farther than it needs to (or should) go. It’s perfectly OK if your company’s first foray into advanced analytics gets scrapped before seeing widespread deployment. Even if you have to go back to the drawing board, doing so with better resources than when you started that first project is really a win.
For curious readers, we highly recommend the paper “Hidden Technical Debt in Machine Learning Systems,” which serves as an excellent primer on what makes technical debt especially tricky with ML.