Defining and Crafting the Future of Data Science 

Liberty Vittert, Washington University in St. Louis

Data science is a constantly changing, constantly debated, and constantly confusing discipline. Given its "newness," we face significant challenges in defining it for others and for ourselves, and in understanding what its future will be. Now is the time to craft the future of data science, but how do we do that? Together we will explore how we might define data science, what led to that definition, and where we each see our place in its future.

The Impact of the Gig-Economy on Financial Hardship among Low Income Families

Kaitlin Daniels, Washington University in St. Louis

Problem Definition: New work arrangements coordinated by gig-economy platforms offer workers discretion over their work schedules at the expense of traditional worker protections. We empirically measure the impact of expanding access to gigs on worker financial health, with a focus on low- and moderate-income (LMI) families.

Academic/Practical Relevance: Understanding the welfare implications of access to gigs informs workers considering gig work and the regulators empowered to protect them. Additionally, firms that rely on this working arrangement may find themselves exposed to increased worker turnover and regulatory intervention if gigs negatively impact worker financial health.

Methodology: We analyze a novel data set documenting the financial health of a sample of LMI families. We are interested in the likelihood that a family experiences hardship, meaning they fail to pay their bills on time. We leverage the sequential launch of Uber's UberX service across locations to identify the impact of the associated increase in access to gigs on hardship via a difference-in-differences design. The granularity of our data allows exploration of possible mechanisms for our results.
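
As a rough illustration of the difference-in-differences design described above, the sketch below estimates a two-way fixed-effects specification on a hypothetical family-year panel. The file and column names (lmi_family_panel.csv, hardship, uberx_live, market, year) are placeholders, not the authors' actual data or code.

```python
# Minimal two-way fixed-effects DiD sketch; NOT the authors' code.
# Assumed panel: one row per family-year, with hypothetical columns.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("lmi_family_panel.csv")  # hypothetical data extract

# hardship: 1 if the family missed a bill payment that year, else 0.
# uberx_live: 1 once UberX has launched in the family's market, else 0.
# Market and year fixed effects absorb location and time shocks; the
# coefficient on uberx_live is the DiD estimate of gig access on hardship.
model = smf.ols(
    "hardship ~ uberx_live + C(market) + C(year)", data=panel
).fit(cov_type="cluster", cov_kwds={"groups": panel["market"]})

print(model.params["uberx_live"], model.bse["uberx_live"])
```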

Results: We find that UberX increases hardship among the LMI population, primarily by decreasing overall take-home pay (i.e., annual income less expenses). This is despite a corresponding reduction in income volatility, generally a boon to LMI families, who have insufficient savings to weather unexpected dips in earnings.

Managerial Implications: These results caution that gigs can be harmful to the most vulnerable members of society. Our analysis of antecedents of this result offers guidance for effective mechanisms for improving worker financial health in the presence of gigs. Further, we find that gigs offer potential benefits to the LMI population through reduction in income volatility.

How AI Sight and Sound Improve Safety and Production Quality in Manufacturing

Kay Apperson, Microsoft

In this talk, Kay Apperson will cover AI in the cognitive space, in particular vision (sight) and acoustics (sound). As chapter 2 of her talk at prepare.ai, she'll recap the AI fiber-optic manufacturing work done in collaboration with MIT's Department of Mechanical Engineering, which serves as the foundation for this talk. In addition, she'll present real-world vision AI use cases developed in close collaboration with global and US manufacturing organizations, in two areas: 1) using vision AI to increase safety and comply with OSHA regulations, and 2) using vision AI to improve assembly, process, and product quality through just-in-time defect detection. She will also report the latest progress of her AI "pet project," which uses acoustic signals for predictive maintenance. Acoustic data is one of the most reliable signals for estimating the remaining useful life of a machine. This matters especially for AI systems deployed at scale: a system that ingests data from a single microphone is likely far cheaper to implement than one that requires many cameras to achieve a full 360-degree view.
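
To make the acoustic idea concrete, here is a minimal sketch of one common approach (spectral features plus an unsupervised outlier model). The file names, feature choice, and use of scikit-learn's IsolationForest are illustrative assumptions, not a description of Apperson's system.

```python
# Illustrative acoustic anomaly scoring for predictive maintenance;
# file names, sample format, and model choice are assumptions only.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
from sklearn.ensemble import IsolationForest

def spectral_features(path):
    """Summarize a machine-audio clip as mean power per frequency bin."""
    rate, audio = wavfile.read(path)              # mono WAV assumed
    _, _, sxx = spectrogram(audio.astype(float), fs=rate)
    return sxx.mean(axis=1)                       # average over time segments

# Fit on clips recorded while the machine was known to be healthy...
healthy = np.array([spectral_features(f"healthy_{i}.wav") for i in range(100)])
detector = IsolationForest(random_state=0).fit(healthy)

# ...then flag new clips whose spectra drift away from the healthy baseline.
score = detector.score_samples([spectral_features("today.wav")])
print("anomaly score (lower = more anomalous):", score[0])
```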

Applying Operations Research in Trait Introgression to Improve the Success of Bayer Traited Products

Qinglin Duan and Bing Liu, Bayer Crop Science

At Bayer, we sell elite germplasm with important biotech traits added. The biotech traits are transferred into conventionally bred lines through a process called Trait Introgression (TI). To maintain a competitive advantage, the introgressed lines need to be ready at the same time as the conventional lines so that we can bring yield gains to market as quickly as possible. One of the most important factors in finishing the TI process on time is selecting the right parents: the parents need to be closely matched on various genotypic and phenotypic characteristics, while satisfying constraints including timing and market needs. Out of millions of possible combinations, the best subsets must be selected to control cost. Using Operations Research methods, we were able to optimize the selection of parental lines while satisfying all of these constraints. This optimization strategy starts our TI process on the best possible terms and increases the rate of on-time delivery, bringing millions of dollars of value from increased genetic gain. This is a great success story of applying operations research in the world's largest seed production pipeline, impacting the majority of our traited products.
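
For readers unfamiliar with this class of problem, the toy model below casts parent selection as a small binary integer program in PuLP. The pair scores, unit costs, budget, and coverage constraint are invented stand-ins for the genotypic/phenotypic matching, cost, and timing constraints described above, not Bayer's actual formulation.

```python
# Toy binary integer program for parental-line selection; NOT Bayer's model.
# Pair scores, costs, budget, and the coverage rule are invented placeholders.
import pulp

pairs = {("eliteA", "donor1"): 0.9, ("eliteA", "donor2"): 0.7,
         ("eliteB", "donor1"): 0.6, ("eliteB", "donor2"): 0.8}
cost = {p: 1.0 for p in pairs}  # assume one unit of cost per cross
budget = 3                      # at most three crosses in the pipeline

x = pulp.LpVariable.dicts("select", pairs, cat="Binary")
prob = pulp.LpProblem("TI_parent_selection", pulp.LpMaximize)

# Objective: maximize total match quality across the selected parent pairs.
prob += pulp.lpSum(pairs[p] * x[p] for p in pairs)
# Budget: total cost of the selected crosses must stay within the program budget.
prob += pulp.lpSum(cost[p] * x[p] for p in pairs) <= budget
# Coverage: every elite line must be paired with at least one donor.
for elite in {e for (e, d) in pairs}:
    prob += pulp.lpSum(x[p] for p in pairs if p[0] == elite) >= 1

prob.solve()
print([p for p in pairs if x[p].value() == 1])
```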

Sensing Anomalies from Big Transaction Data

Aihong Wen, Walmart Labs

At Walmart, the world's Fortune #1 company, our teams are obsessed with finding ways to help customers live better by leveraging the power of data science. In this talk, we will showcase one example in which we apply multiple machine learning techniques to detect, and alert on, anomalies that could affect millions of dollars in revenue hidden within a huge volume of complex transaction data.
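
As a flavor of what such anomaly detection can look like, the sketch below scores daily revenue aggregates against a trailing baseline. The file name, window length, and alert threshold are assumptions for illustration, not Walmart's pipeline.

```python
# Rolling z-score sketch for revenue-anomaly alerting; NOT Walmart's pipeline.
# File name, window length, and alert threshold are illustrative assumptions.
import pandas as pd

txns = pd.read_csv("transactions.csv", parse_dates=["ts"])  # hypothetical extract
daily = txns.set_index("ts")["amount"].resample("D").sum()

# Score each day against a trailing 28-day baseline; a large deviation from
# recent behavior suggests something worth an analyst's attention.
baseline = daily.rolling(28)
z = (daily - baseline.mean()) / baseline.std()
alerts = daily[z.abs() > 4]  # alert threshold chosen for illustration only
print(alerts)
```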

Data Science Nourishes Customer Experience at Schnucks

Yujin Lee and Saraiya Kalu, Schnucks

Coming soon!

Data Science: Cultivating A Process Improvement Culture

Katrina Drinnon, Lexicon

Often, as data scientists, we work with a wide variety of teams and individuals (e.g., subject matter experts, software developers, database admins, data architects) who each have their own definitions of standardized processes, or lack thereof. It's easy to get lost in limbo in the middle of it all, especially if you are on a small or solo team and/or the new kid on the block. We get busy fixing reports, answering the executive team's most recent hot question, or building the next cool algorithm. As a result, we rush through the business understanding and data exploration phases of our data science research; documentation becomes tomorrow's problem; and we jump from one project to the next without taking the time to reflect on our processes. But as data scientists, we sit in a unique place in our organizations: we can improve not only our own processes but also the data we analyze, by helping improve the processes of the clients we serve. So what does process improvement look like on a data science team? How can we cultivate a culture of process improvement within our organizations?
