Task Manager

 

Creating a Task Manager

For health researchers to manage their participant relationships, ensuring active study participation


Project Highlights

Opportunity

  • Increase user adoption and engagement

  • Create a central, streamlined tool for our users to track and report their daily research activities

  • Give users visibility into their individual and team bandwidth

  • Save them time while ensuring program efficacy

Action

Generative research and formative testing to identify:

  • users’ daily activities, prioritizing the most impactful tasks and daily challenges

  • competitor offerings and workarounds

  • technical constraints

Designed in ~7 weeks

Outcome

  • Our users were able to view and take action on their most common daily activities, increasing adoption and engagement

  • The All of Us Research Program committed to migrating all of their sites

  • We maintained our reputation as an innovative, reliable partner who delivers useful, user-centric products quickly.

 

Product
Research Cloud is a program management toolkit for health researchers to build and maintain relationships with participants using communication tools and dashboard reporting.

It has 25 modules that are used by 1,500+ health researchers from 17+ health studies across 62+ locations, servicing 600,000+ participants.

Primary Customer/Users
The All of Us Research Program Study Staff convert patients into participants within hospital systems and medical facilities. This tool helps staff manage participant relationships, ensuring active participant engagement with the program.

My Role
Discovery, user testing, design, strategy, requirements, and copywriting – while managing other enterprise designers.

I partnered with a Product Manager to ensure business alignment and timelines, an Engineering Lead to ensure feasibility, and I leveraged expertise from a Customer Success Manager to brainstorm and facilitate user testing.


Starting with discovery to identify opportunities, empathize with challenges, and define success

 
Screenshot of competitor analysis

Generative research: user interviews, journey mapping, competitor analysis

The secret sauce to understanding everyone’s needs and expectations so you can truly empathize with users to inform your priorities, scope, and definition of success.

Screenshot of one of the user survey responses used to prioritize tasks and scope project
 

My discovery process

  • Business and customer research: I met with Product Leadership to better understand business and customer goals. I watched advisory board meeting recordings to understand customer expectations and challenges.

  • User research: I met with a Customer Success Manager and collaborated on generative research to learn user sentiment and workflows, to prioritize what’s most important to users, and to identify other tools they were using so I knew which competitors to review

  • Competitor analysis: I created accounts with 5 other project management tools to collect industry standards, potential user expectations, and areas of opportunity to differentiate ourselves

  • Best practice research: I scoured the web for project management best practices and reminded myself of heuristics and patterns to leverage

  • User journey mapping: I reviewed the research study’s primary task workflows to deeply understand what information our users have now, what they need to know, how to prioritize it, and what is possible from a technical perspective

  • Requirements: I met with product and engineering to understand and discuss requirements and prioritize our scope, including affected roles, views, and technical limitations that impact this feature

 

Our why

  • Business Goals: We need to increase user adoption and engagement by empowering health researchers to build and maintain participant relationships. We must maintain our reputation as a reliable partner by creating scalable, user-centric products.

  • Customer Challenges: The program is using multiple inefficient project management tools across national sites, making it nearly impossible to track holistic staff effort and research program efficacy, and increasing their security and compliance risks. Managing various tools also costs them in staff training time. They worry about their reputation as a secure place for participants to share their data.

  • Enterprise User Challenges: Research Staff aren’t meeting their recruitment targets; they are missing participant appointments and forgetting to follow up. Staff Managers lack visibility into their team’s bandwidth and efforts, making progress reporting challenging. It’s also difficult for Staff to move between sites because they’re forced to learn new tooling.

  • Consumer User Challenges: Participants could lose trust and interest in the program if staff send incorrect or poorly timed communication. Imagine a participant who donated bio-samples yesterday receiving an email today requesting their bio-samples: this could raise concerns that we are not handling their sensitive data with care.

 

Our definition of success

Increase user adoption and engagement by creating a centralized, streamlined tool for our enterprise users to track and report their most impactful daily research activities, giving them visibility into their individual and team bandwidth, saving them time, and ensuring active participant engagement and program efficacy.

 

Our high-level scope

  • Scheduled tasks in scope: appointments, follow ups, 1:1 engagements

  • Administrative tasks in scope: case lists, custom tasks

  • Roles within scope: Program Managers, Campaign Managers, Communication Managers, CATI interviewers

  • Views: MVP will display team information, filtered by date, site, and assignee, so users can view and reassign activities between assignees and sites

  • Out of scope: Notifications, Calendar, Communications activities like Segmentations and Campaigns, and roles that do not have scheduled appointments

 

Next, moving into design, prototype testing, building, and reflecting on our efforts

 

My design process

  • Design: I started with mid-fidelity designs, utilizing the product’s design system and iterating based on internal team, stakeholder, and technical guidance

  • User testing: I created prototypes and partnered with a Customer Success Manager. We established test goals and created 3 Google Forms surveys; she facilitated 2 focus groups. I collated our feedback into an actionable list of changes to prioritize our scope, then iterated on the design to infuse test insights and performed peer reviews with internal resources

  • Build: I shared screens in Zeplin with our offshore engineering team and met regularly.

  • VQA: I performed Visual Quality Assurance checks to ensure parity between the designs and environment, until the project was complete.

  • Future: I began Phase 1 designs to set a vision for future enhancements

 

Our outcomes

  • Business: Increased user adoption and engagement across the platform as intended, and acquired new users

  • Customer: Saw us as a reliable, innovative partner, validated by their commitment to migrate all of their sites to our product to centralize health study efforts and reporting

  • Users: Have visibility into their individual and team bandwidth, saving them time and ensuring timely communications to build quality participant relationships

  • Team: Gained a new collaborator and subject matter expert in our Customer Success Manager, growing our collective empathy and ensuring user-centric processes

  • Product: We were able to incorporate more design system components, making our product more scalable. User testing insights offered suggestions for future work and clarified user expectations of our tool beyond this feature

Learning and reflection

  • Strategy: Tying business and customer goals together allowed us to align toward strategic business outcomes, which was exciting and fostered customer empathy team-wide

  • Team: Due to the speed of implementation, I drove most of the requirements decisions based on customer and user insights, which was empowering but also made me appreciate my product partners for their scoping insights

  • Process: Regular meetings with engineering allowed us to get ahead of constraints during the design process, so we were able to finalize designs while development was happening, accelerating our time to market

 

I started by creating and testing 3 designs

 

User Testing Round 1

The core differentiator is the way appointments are displayed, since that was the primary use case for this task manager. These designs purposefully showed all task types RC/PMT offers to indirectly force our users to tell us what is most important for MVP.

 

Option A (calendar): Offers our users a mini-calendar similar to the calendar they see in scheduling, with the ability to drag and drop tasks onto the calendar and see when peers are out of the office. The table offers interactivity to update progress, assignees, and due dates, plus an accordion for a deeper dive into their call lists without needing to open the module directly.

Option B (kanban): Offers a kanban board approach with some interactivity; however, our users would need to navigate to the module to complete tasks. Appointments would always be positioned on the right and uniquely categorized to help with prioritizing those appointment tasks.

Option C (table kanban): Combines aspects of A and B, showcasing appointments in a table format so users can quickly see lots of details while maintaining the kanban framework for other tasks.

 

I iterated and tested again to validate design decisions

User Testing Round 2

View Prototype

We learned that researchers preferred the tables while managers were mixed, because some managers are both players and coaches. They preferred tables because they needed quick access to a lot of information and didn’t want to jump around to find it. We quickly iterated for this final round of testing to gauge whether our assumptions about their feedback were accurate.

 

Leveraging the beloved table format, this design allows for editing in drawers so our users can see everything from here without going to each module. We kept features that were well reviewed, like special notes for appointments. We also reduced the task types to custom tasks, appointments and follow ups, and case lists. We tested new terminology too.

Edit custom task drawer: instead of a popup, this provides more space for more information, which was a common theme throughout testing.

View case list drawer: evaluating these drawers helped us understand how users interacted with the information so we could rearrange content to suit their needs.

Participant appointment drawer: leveraging and reformatting the existing scheduling workflow was an opportunity to also improve that feature.

 
Screenshot of user testing preparation notes
Screenshot of user testing feedback notes

Design and formative research: user surveys and user testing

The best way to validate your designs and continue prioritizing scope and changes.

 

I iterated and finalized the design with engineering

MVP design

We delivered! Round 2 earned rave reviews, with mostly minor changes based on technical limitations we knew might be a challenge, and it allowed us to plan for future phases without the need for another round of testing. It was unfortunate that we could only utilize the drawer functionality for Creating Custom Tasks, so MVP takes our users directly to the modules. We believe this was a strong win for the team and our users.

 

The key changes were that we were only able to provide appointments, engagement follow ups, case lists, and completed tasks, and we had to adapt our filter functionality.

Task manager with in-line success alerts

Task manager if our users do not have any tasks available

Filters dropdown menus

 

I created future designs to keep momentum going

Post-MVP Phase 1

I got so excited about this tool that I designed our potential next phase based on all of the user testing and feedback from internal peer reviews. Note this work is still in progress, so there are several magenta areas where I am requesting product manager and/or technical feedback because I’m introducing additional task types and features.

Combines the time and details columns in the Scheduled tasks table with a tabbed experience for different types of scheduled tasks, because our users voiced concerns that seeing the full day of activities at once was too cluttered. This version includes the drawers that were a favorite of all of our users, and we added custom tasks back for their administrative needs. This also allows for appointment confirmations, another widely appreciated feature.

Showing the engagement follow up tasks tab.

Shows appointment tasks that require outcomes. Outcomes are required to officially complete an appointment so that our system knows what transpired during the appointment.

Drawers for adding and editing custom tasks.

Drawers for adding and editing participant appointment tasks.

Drawers for adding and editing case list tasks. Case lists are 1:1 engagement lists where researchers contact participants for retention activities.

Drawers for engagement follow up tasks that allow our users to input their engagement activities.