How to avoid conflicts and delays in the AI development process (Part II)

In Part I of my two-part series on avoiding conflicts and delays in AI development, I introduced the first three steps of Intuit's AI playbook. Today, I'll walk through the last three steps. Together, all six steps will help you integrate AI into your product: delivering improved customer experiences and benefits, improving your efficiency, accelerating your customer support, strengthening the security of your data and products, and more.

Last three steps to avoiding conflicts and delays in AI development

We were able to create Intuit’s comprehensive AI playbook with help from Intuit’s fraud prevention mission team and Intuit® AI team members. It provides us with a clear, step-by-step process (integrated into the wider planning and development cycle) for avoiding AI development conflicts and delays. The playbook allows us to align our teams and plan efficiently, and it can help your team do the same.

Before we look at the last three steps in the playbook, let’s do a quick, high-level review of the first three steps.

Step one follows an established intake process, led by the product manager (PM) of the mission team, in which the internal customer defines the problem using the Intuit Design for Delight (D4D) template. The intake requires key performance indicators (KPIs) and a quantified estimate of the problem's impact: the numbers, the success metrics, and how they will be measured. The impact estimate is used to prioritize the AI initiative against other initiatives.

Steps two and three involve setting up the specific mission team for the prioritized AI initiative. The team works on the solution design, design review/sign-off, and quarterly commitment planning, where the wider mission team completes a capacity review (through a defined quarterly planning process) of all the prioritized requests and determines whether the AI initiative makes the cut for the quarter. If it does, it's time to move on to step four.

4.  Quarterly project planning

Once an AI initiative makes the cut and becomes a commitment for the quarter, a kickoff meeting needs to be set up with all the relevant stakeholders, including the product developer (PD), data scientist (DS), program manager/technical program manager (PM/TPM), and partner teams. Using the design, each PD team breaks the requirements into Epic-level tasks and documents them as Jira stories.

During this stage, which is estimated to take 1-2 weeks, it’s up to the project driver (usually the PM/TPM) to communicate the timelines and the definition of “done” to all relevant stakeholders.

5.  Project execution

It’s now time to execute the project. Project execution is made up of five stages:

5a: DS research and development (4-16 weeks). This stage involves data collection, exploration, experimentation, feature generation, and model development. The mission-based team meets weekly to monitor progress and ensure that every change in design, scope, or timelines is communicated to all stakeholders. The DS communicates model results and progress, and iterates on the model/data based on policy feedback until the required performance is achieved. During this period, the DS team will experiment with different types of solutions and will work with PM/policy/analytics to evaluate their effectiveness. Keep in mind that this may change the estimated research and development time. At the end of this stage, the DS team shares the model and feature details and the performance evaluation with the mission team.
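To make stage 5a concrete, here is a minimal sketch of the experimentation loop in Python. It is not Intuit's actual code: the dataset, feature names, candidate models, and the performance bar are all hypothetical stand-ins for whatever the DS team agrees on with PM/policy.

```python
# Minimal sketch of the 5a experimentation loop. The data file, feature
# columns, candidate models, and REQUIRED_AUC are hypothetical examples.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("transactions_sample.csv")             # hypothetical extract
X = df[["amount", "account_age_days", "velocity_1h"]]   # candidate features
y = df["is_fraud"]                                      # matured label

REQUIRED_AUC = 0.90  # hypothetical performance bar agreed with PM/policy

candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "gbt": GradientBoostingClassifier(),
}
for name, model in candidates.items():
    # 5-fold cross-validated ROC AUC for each candidate solution
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    status = "meets bar" if auc >= REQUIRED_AUC else "iterate"
    print(f"{name}: AUC={auc:.3f} ({status})")
```

Each candidate that misses the bar feeds another iteration of feature work and policy feedback, which is why the 4-16 week estimate has such a wide range.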

5b: Model implementation to silent release (2-7 weeks). In this stage, the machine learning engineer (MLE) collaborates with the DS to develop production features, backfill those features with historical data, and finalize the model's production code. The PD integrates the call to the model from the product at the required checkpoint. Finally, the required dashboards are built to fulfill model monitoring requirements.
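Below is an illustrative sketch of the product-side checkpoint integration during a silent release. The model client, feature names, and `SILENT_MODE` flag are assumptions for the example, not Intuit's serving API; the key idea is that the model is scored and logged at the checkpoint but never acted on while silent.

```python
# Illustrative checkpoint integration for a silent release. The client
# object, its predict() call, and the threshold are all hypothetical.
import logging

SILENT_MODE = True  # during silent release, score and log but never act

def score_transaction(model_client, txn: dict) -> float:
    """Call the deployed model at the product checkpoint."""
    features = {
        "amount": txn["amount"],
        "account_age_days": txn["account_age_days"],
    }
    return model_client.predict(features)  # hypothetical client call

def checkpoint(model_client, txn: dict) -> str:
    score = score_transaction(model_client, txn)
    # Log every score so the sign-off analysis (5d) has silent-period data.
    logging.info("fraud_model_v1 score=%.4f txn=%s", score, txn["id"])
    if SILENT_MODE:
        return "allow"  # no customer-facing action in silent mode
    return "review" if score > 0.8 else "allow"  # action-mode policy
```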

5c: Model monitoring (ongoing). This is where we start monitoring model performance in production. It is important for the DS team to define standard monitoring requirements so that we are alerted to model performance deterioration or any malfunction. The main items we monitor include: score distribution, feature health, flag rate, performance against model-defined business KPIs (tracked by analytics), service level agreement (SLA) compliance, and timeout rate.
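Here is a rough sketch of what such daily checks might look like; the thresholds and the drift test are hypothetical choices, standing in for whatever the DS team's standard monitoring requirements specify.

```python
# Sketch of daily monitoring checks over the items listed above.
# All thresholds are hypothetical; real values come from the DS team.
from scipy.stats import ks_2samp

THRESHOLDS = {
    "flag_rate_max": 0.05,      # share of traffic flagged by the model
    "timeout_rate_max": 0.01,   # share of calls exceeding the SLA
    "score_drift_pvalue": 0.01, # drift-test significance level
}

def run_checks(today_scores, baseline_scores, flag_rate, timeout_rate):
    alerts = []
    if flag_rate > THRESHOLDS["flag_rate_max"]:
        alerts.append(f"flag rate {flag_rate:.2%} above threshold")
    if timeout_rate > THRESHOLDS["timeout_rate_max"]:
        alerts.append(f"timeout rate {timeout_rate:.2%} above threshold")
    # Kolmogorov-Smirnov test: has the score distribution drifted
    # from the baseline captured at release?
    stat, pvalue = ks_2samp(today_scores, baseline_scores)
    if pvalue < THRESHOLDS["score_drift_pvalue"]:
        alerts.append(f"score distribution drift (KS={stat:.3f})")
    return alerts
```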

5d: Sign-off process (at the end of the silent period, right before project release). During the silent period, before the action-mode release, the PM initiates and completes a predefined sign-off process with all the relevant stakeholders, based on the analysis of model performance during the sign-off period.

5e: Silent period, policy development, model release, and retrospective (4-12 weeks).

Policy development—After a silent period of 4-12 weeks (depending on the label maturation period, policy decisions, and sample size), policy/PM/DS analyze the model scores and define the strategy under which the model will go to action mode.
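As one hypothetical example of this score analysis, the sketch below picks the lowest score threshold whose silent-period precision still meets a target agreed by policy/PM/DS, maximizing recall at that precision. The function name and target value are assumptions, not the playbook's prescribed method.

```python
# Hypothetical threshold selection from silent-period scores and matured
# labels. target_precision is an assumed policy choice, not a real value.
import numpy as np

def pick_threshold(scores, labels, target_precision=0.95):
    """Return the lowest threshold that still meets the precision target."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    best = None
    for threshold in np.arange(0.99, 0.0, -0.01):
        flagged = scores >= threshold
        if flagged.sum() == 0:
            continue  # nothing flagged this high; keep lowering
        precision = labels[flagged].mean()
        if precision >= target_precision:
            best = round(float(threshold), 2)  # lower threshold -> more recall
        else:
            break  # precision only degrades from here
    return best  # None means no threshold meets the target
```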

Model release—The PD transitions the model to action mode, with DS, MLE, policy, and analytics monitoring the process for the first few hours. A dedicated technical product manager manages the release stages and provides a single point of contact for the release.

Retrospective—One to two weeks after release, the driver of the mission team schedules a retrospective to learn what did and didn't work well and to gain insights for future projects.

6.  RTB (run the business) – Model upkeep and incident management

We’ve reached the final step.

  • 6a: Model retraining (ongoing, according to model production performance). When the model deteriorates beyond a certain threshold (defined by the policy/PM/DS team at model release), the DS team retrains the model in production, and PM/policy/DS analyze the model results and update the policy/strategy as necessary. Phases 3-5 repeat as needed for the new model version (see the sketch after this list).
  • 6b: Incident management (approximately one week). Occasionally, we experience incidents such as feature anomalies and/or platform timeouts. A root cause analysis (RCA) should be written for every incident. If the problem is in the PD code, the PD team drives the RCA; if it is in the DS code, the DS team drives it. The analytics team quantifies the business impact.
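Here is a minimal sketch of the retraining trigger described in 6a; the baseline AUC, the degradation threshold, and the metric itself are hypothetical stand-ins for whatever the policy/PM/DS team defined at release.

```python
# Hypothetical retraining trigger for 6a. RELEASE_AUC and
# MAX_DEGRADATION stand in for the team's actual release-time values.
from sklearn.metrics import roc_auc_score

RELEASE_AUC = 0.90       # performance measured at model release
MAX_DEGRADATION = 0.05   # retrain if AUC drops by more than this

def needs_retrain(matured_labels, production_scores) -> bool:
    """Compare current production performance against the release baseline."""
    current_auc = roc_auc_score(matured_labels, production_scores)
    return current_auc < RELEASE_AUC - MAX_DEGRADATION
```

When this check fires, the retraining path described above kicks in, and the new model version cycles back through the planning, execution, and sign-off phases.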

Six steps to AI-development confidence

Now that you know Intuit’s six steps to avoiding conflicts and delays in AI development, I hope you’re feeling confident that you and your team can efficiently integrate AI solutions into your product. It is by no means an easy or quick process, but it is a very doable one if you define your team’s roles and responsibilities, follow the six steps, and make improvements as you learn what does and doesn’t work for you.

I’ve learned many lessons leading AI teams, and I expect you’ll learn quite a few yourself as you start your own journey. With AI redefining apps, you’re well on your way to being ahead of other businesses that have yet to adopt it.
