How to avoid conflicts and delays in the AI development process (Part I)

Organizations of all sizes are slowly but surely adopting artificial intelligence (AI). Forbes, citing an IBM study, reported that 34% of businesses in the U.S., EU, and China have deployed AI, and a Gartner survey of 3,000 CIOs worldwide found that 37% are using the technology. If your business is among them, then you already know that new technology can bring new issues, including conflicts and delays. In many cases, these stem from a lack of awareness of the unique nature of the AI product development cycle, and of how to plan and collaborate through it.

Today, I’m sharing Intuit’s AI playbook, which drives alignment and efficiency by defining team roles and responsibilities, from inception to production.

Intuit’s tried and true method for avoiding conflicts and delays in AI development

AI provides businesses in every industry with exciting benefits, such as improving efficiency, ensuring data security, and optimizing business processes. However, there’s always a learning curve with new technology.

In the last few years, Intuit® has infused AI solutions into its products. Based on that experience, we have found that a well-known process, integrated into the wider planning and development cycles, allows us to: (1) avoid conflicts by creating alignment between the teams contributing to the AI initiative, and (2) avoid delays by planning more efficiently.

Your business should have a process in place that helps you and your team develop and integrate AI solutions into your product efficiently. That’s easier said than done, which is why I’m giving you the first three steps in Intuit’s AI playbook.

1.  Problem identification and prioritization

To deliver AI solutions, AI teams work collaboratively within a wider mission team that includes product developer (PD) teams, product managers (PMs), data analytics, and, for fraud/credit use cases, a policy team.

The groups in the wider mission team perform an internal quarterly planning process, led by the PM, in which internal customers can submit problems or requests through an established intake process.

The intake form requires the internal customer to define the customer problem according to the Intuit Design for Delight (D4D) customer problem template:

<Customer> is trying to gain/do this <benefit> but is unable to/hindered because of <problem>. Intuit/Intuit AI will help deliver this <benefit> by <how improvement achieved>, which will lead to <improvement from value, to value>, which will be delivered by <delivery date> in support of input goal XX.
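For illustration only (this is a hypothetical example, not an actual Intuit intake), a filled-in template might read: a small-business owner is trying to get paid on time but is unable to because legitimate invoices are flagged for manual review. Intuit AI will help deliver on-time payments by automating low-risk review decisions, which will lead to review turnaround dropping from days to hours, which will be delivered by the end of the quarter in support of the relevant input goal.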

The template is just the start. We go further and find the root cause of the problem, and how that problem makes the customer feel. This gives us a specific starting point and spurs innovation.

In the intake form, the customer is asked to define the key performance indicators (KPIs) and the impact of the problem. Specifically, this includes the problem scope in numbers, success metrics, and how we’ll measure them.
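As a minimal sketch of what such an intake record might capture (the field names and the fraud example below are hypothetical illustrations, not Intuit’s actual form):

```python
from dataclasses import dataclass, field

@dataclass
class IntakeRequest:
    """Hypothetical intake record; all fields are illustrative only."""
    customer: str            # who is affected
    benefit: str             # what they are trying to gain or do
    problem: str             # what is blocking them
    problem_scope: str       # the problem in numbers
    success_metrics: list[str] = field(default_factory=list)  # the KPIs
    measurement_plan: str = ""  # how each KPI will be measured

request = IntakeRequest(
    customer="small-business payment users",
    benefit="get paid without fraudulent chargebacks",
    problem="manual review misses sophisticated fraud patterns",
    problem_scope="N flagged transactions per month, $M at risk",
    success_metrics=["fraud loss rate", "false positive rate"],
    measurement_plan="weekly dashboard comparing flagged vs. confirmed fraud",
)
```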

The expectation is that before a stakeholder requests a machine learning model to solve a problem, simpler rule-based solutions have already been tried in previous experiments, and we have reached a shared understanding that a more advanced solution leveraging historical data would be more effective.
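For example, a first experiment might be a hand-written rule evaluated against labeled historical data; only when a rule like this falls short does an ML model earn its complexity. A minimal sketch (the rule, thresholds, and field names are hypothetical):

```python
import pandas as pd

def rule_based_flag(txn: pd.Series) -> bool:
    """Hypothetical rule: flag large transactions from newly created accounts."""
    return txn["amount"] > 5_000 and txn["account_age_days"] < 30

# Evaluate the rule against historical, labeled data before proposing an ML model.
history = pd.DataFrame({
    "amount": [120, 8_000, 6_500, 300],
    "account_age_days": [400, 10, 200, 5],
    "is_fraud": [False, True, False, False],
})
history["flagged"] = history.apply(rule_based_flag, axis=1)
hits = (history["flagged"] & history["is_fraud"]).sum()
precision = hits / max(history["flagged"].sum(), 1)
recall = hits / max(history["is_fraud"].sum(), 1)
print(f"rule baseline: precision={precision:.2f}, recall={recall:.2f}")
```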

Step one is complete when the PM and analytics leads review and prioritize the initiatives based on agreed-upon criteria.

2.  Solution design + design review and signoff

Step two begins with setting up the specific mission team for the prioritized AI initiative. The team can include multiple roles, including the following:

  • Data scientist(s) – developing the AI solution.
  • Machine learning engineer(s) – bringing the AI solution to production (in collaboration with the data scientists).
  • Product developer(s) (PD) – integrating the AI solution into the product.
  • Data analytics – locating the relevant data sources, estimating the potential impact, and building the dashboards that reflect the solution’s impact; possibly the domain expert.
  • PM/TPM (product/technical program manager) – defining the solution requirements and possibly driving the mission team; possibly the domain expert.
  • Policy (for fraud/credit-related initiatives) – defining the solution requirements, possibly driving the mission team and defining the final policy based on the model results and the business needs; possibly the domain expert.

Within this team, there needs to be a Driver (usually the PM, TPM, or policy lead), who serves as the main point of contact and is in charge of alignment and communication. The Driver’s duties include scheduling weekly mission team meetings throughout the entire development cycle, sending meeting summaries, opening a Wiki page, building and maintaining the master deck, communicating with relevant stakeholders, and allocating resources based on the project’s requirements and needs while working with the relevant group leaders.

There should also be at least one domain expert (PM/analytics/policy) who is closely familiar with the problem space and the relevant data sources. The domain expert works closely with the data scientists during the project execution stage (step 5, which will be explained in part II of this blog) to provide feedback and evaluate the appropriate solution based on the requirements.

Once the team is in place (which can take approximately one to two weeks), we move to data collection and exploration, as well as developing an initial benchmark model that will help us estimate the potential range of impact. This portion of the process is done in conjunction with the solution design and review process.
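A benchmark model at this stage can be deliberately simple. Here is a minimal sketch, assuming labeled historical data is already available (the dataset and features below are synthetic stand-ins):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for labeled historical data: rows are past cases,
# y marks the outcome the AI initiative would predict (e.g., confirmed fraud).
rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1_000) > 0).astype(int)

# Cross-validated AUC from a simple linear model gives a floor on achievable
# performance, which the team can translate into a potential range of impact.
benchmark = LogisticRegression(max_iter=1_000)
auc_scores = cross_val_score(benchmark, X, y, cv=5, scoring="roc_auc")
print(f"benchmark AUC: {auc_scores.mean():.2f} ± {auc_scores.std():.2f}")
```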

The solution design process includes setting focused meetings with small teams according to their subject matter expertise:

  • Solution architecture, including data pipelines, real-time (RT)/batch, checkpoint, and action (architect + PD + PM + MLE + DE + policy).
  • Label and dataset requirements and signoff (policy + analytics + DS + DE).
  • Partner team dependencies (policy + DS + PD + PM + MLE).
  • Timelines (roadmap) (PM + PD + MLE).
  • Creating a benchmark model to give a potential range of impact and potential risks (DS + policy + PD).
  • Evaluating legal, compliance, and security implications.
  • Final design approval, including dataset and labels signoff.
  • Estimated timelines.

Step two ends with the design review and go/no-go decision. The leaders review the design after it is complete, and the design and roadmap are modified based on their feedback (if the architecture is not trivial, approval by the PD lead/architect is required).

We then make the go/no-go decision based on the proposed design, timeline, potential impact, and estimated risks.

3.  Quarterly commitment planning

Step three takes anywhere from two to five weeks. Through the defined quarterly planning process, the wider mission team completes a capacity review of all the prioritized requests and determines whether the AI initiative makes the cut for that quarter.

Avoiding conflicts and delays in AI development requires a detailed process

As you can see, handling AI development properly and avoiding conflicts and delays isn’t a fly-by-the-seat-of-your-pants process. Here at Intuit, we ensure that team roles and responsibilities are clearly defined during each step while also subjecting the process to improvements as we find what works and what doesn’t.

Stay tuned; I will share the last three steps in our AI playbook that help us efficiently deliver AI solutions to production and can help you do the same. In the meantime, check out some of the lessons I’ve learned leading AI teams at Intuit.

Thanks to all of our awesome partners in the fraud prevention mission team who worked with us to develop this playbook and integrate it into the wider process, and to the awesome Intuit AI team members who contributed their insights and thoughts into this playbook.

