20191114 Thoughts

Today passed by so quickly. It was not until a few moments ago that I had the time to start writing. Right now Pandora internet radio is fully booted up and I’m listening to some music while beginning to sit down and write. That is good news at the end of a very long day. Sure, I’m already thinking about dinner tomorrow, but that should not stop me from a little bit of deep thought and maybe a few moments of reflection. Some of those thoughts tonight need to be spent on the next iteration of my applied machine learning roadshow. Maybe working out the mechanics of how to run it from end to end is a worthwhile approach to explaining the full picture. Anyway, I’m going to spend a little bit more time expanding on my initial 5,000 word treatment of the concept. Taking one good shot at it a day is probably the right way to move things forward on that front.

One of the things that I need to do as we approach 2020 is work out the three paper topics I want to start working on next year. Really digging into modern approaches to survey techniques might be a place to start. I’m pretty sure that the modern methodology for gathering preferences is breaking down. You can always pay for focus groups, but random calling via telephone or sending out written survey cards is really becoming problematic. People are fatigued and generally not willing to be respondents. I hung up on a survey the other day for a new reason. I wondered if they were just collecting preference data to help them sell me things vs. doing some type of legitimate polling exercise. Pollsters are not registered by the state like lawyers or medical professionals.

Well that line of thought took off in a few directions. Most of them revolved around how to set up a new polling entity and what the general cost and consequence of doing that sort of thing would be. One of the things that I started to build out the other day and stopped was a real-time sentiment extraction tool for sampling the news and social media. It seems like a ton of those types of tools have been built by stock traders and other groups of people looking to better understand the direction of the public mind in near real time. Even the idea of taking on that type of thing is a relatively interesting endeavor to consider.
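
If I ever pick that tool back up, the scoring piece might start out as something like the rough Python sketch below. It assumes the headlines have already been collected somewhere, and the sample text and the choice of the NLTK VADER analyzer are just placeholders for illustration.

    # Rough sketch of the sentiment sampling idea, not a finished tool.
    # Assumes headlines were already pulled from news or social feeds;
    # collecting them is a separate (and harder) problem.
    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

    headlines = [
        "Survey response rates hit another record low this year",
        "Polling firm accused of selling preference data to advertisers",
        "Study finds people happier when the phone stops ringing at dinner",
    ]

    analyzer = SentimentIntensityAnalyzer()
    for text in headlines:
        scores = analyzer.polarity_scores(text)  # compound runs from -1 to 1
        print(f"{scores['compound']:+.2f}  {text}")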

20191113 Thoughts

Yesterday, I was in New York City for the first time in years. Visiting Times Square this week was an interesting experience to say the least. New York City is a bustling place. People are everywhere and moving around quickly. I had intended to take notes throughout my travels, but things got too busy. Attending an all day conference takes a certain amount of energy. A solid writing session also takes a good amount of energy, putting those two things solidly in conflict. During the Ai4 Healthcare conference I did not have the time to really sit down with my Google Pixelbook Go and write. Things happened very quickly and the speaking sessions were pretty short. That combination of things meant I spent a lot more time listening and talking to people than writing. That worked out well enough and now things are back to normal.

Every year I take the time to attend somewhere between 3 and 5 conferences. That has been my academic routine for a very long time. The conferences have changed over the years, but the general experience of learning always remains relatively the same. You experience new things and figure out where to go next. Within that purpose of striving forward to a better next, the meaning of life, the universe, and pretty much everything is probably lurking. Heavy and deep thoughts for so early on Wednesday, but I have already had four shots of espresso and things are starting to move along.

My Google Pixelbook Go arrived on October 29, 2019 and I have been using it ever since as my daily writing platform. Overall, the device has been really solid and done everything I needed it to do without any problems. My only criticism of the device is that the whole thing is super prone to showing fingerprints. I’m cleaning the whole thing gently every five days or so to keep it looking pristine. Whoever tested the coating material for fingerprints just did not get the device out into the real world enough. I feel like this is something people would have noticed pretty quickly during the course of daily use. My Dell does not have this problem and the ASUS Chromebook Flip never showed any fingerprints on the aluminum case.

Ai4 Healthcare NYC 2019

Tomorrow is the big day and I thought it would be a good idea to write out my speech instead of delivering it via spoken word. Sitting at the airport it seemed like a better idea than randomly talking to the fifty people sitting near me. I figured writing is comfortable and I could sit down and write the speech out within thirty minutes and have a good pass at what was going to be said. Every major speech I have given is broken into chunks. I have an outline and topics to address during the speech. The exact text changes every time and that is a combination of my learning as I go and my inability to repeat myself exactly. Reading from a teleprompter would be one way to go about it, but that is a boring way to talk to people. For me that lacks authenticity and it does not in any way reflect an interaction with the audience. A teleprompter is the same with or without a crowd; nothing on it ever changes. The following words are my first real cut at trying to write out the talk on machine learning that I’m delivering these days to anybody who will listen.

——-

Dr. Nels Lindahl here. I’m a director of clinical decision systems at a Fortune 10 company. Today we are going to talk about figuring out applied machine learning: Building frameworks and teams to operationalize machine learning at scale. We are going on a journey for the next 30 minutes. We will go from the start of the process to the end of the process.

That being said… Truly thinking about machine learning holistically is the hard part of threading the needle on this topic.

#1 Where does the talent come from?

Personally, I believe you can build the talent from within your organization. I made a mental note of your reactions to that claim. Throughout my career I have a proven track record of helping grow internal talent. One of my proudest accomplishments is seeing somebody get promoted. Go out and start building out the toolkits of the people that you have. Really take the time to invest in them growing and developing as teammates and as individual contributors.

We are in the golden age of learning about machine learning. More training than you can possibly consume now exists online. It exists in a variety of different forms. One of my favorites is the hands-on labs that are now available online. The one I have used the most is called Coursera. People have actually built out well tooled examples of how to do machine learning. Not only can you read about it, but you can get into examples and kick the tires. That is the thing that has drawn me to TensorFlow since the product launched. So many people have been so generous with their knowledge, skills, and abilities. They are sharing the keys to the machine learning kingdom online in some pretty easy to access classes, lectures, and even a few certificates. I have taken well over fifty courses; you can see them on my LinkedIn profile. That will show you which ones I invested my own time in completing.
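
To give a sense of what kicking the tires looks like, the block below is roughly the kind of starter exercise those classes walk through. It is the standard TensorFlow beginner pattern on the built-in MNIST digits, not anything pulled from a specific course.

    # Minimal TensorFlow example of the sort the online labs start with:
    # a tiny classifier trained for one epoch on the built-in MNIST digits.
    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to 0-1

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1)
    print(model.evaluate(x_test, y_test))  # [loss, accuracy] on held-out digits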

Sometimes building internal teams is just not fast enough. It takes time to help internal talent develop world class skills in machine learning or anything for that matter. I recognize that is a long term goal. That is where you have a few options to start looking for ways to supplement talent. One of those ways is to hire contractors and have them help you kickstart your endeavor. Another way is to find the right product or company to help you get going fast. A number of companies are doing that right now and some of them can be really impactful for your organization.

Typically the data sources in an organization are not well indexed with clearly mapped features and associations. Even getting off the shelf data sources is a real challenge. For the most part the ones that people use were created to be used that way. Those data sets did not occur naturally in the wild. Even making custom tailored synthetic datasets can be a challenge for an organization that is trying to operationalize ML at scale. That is where using external products to manage the data and even accessing APIs requires planning and sustained dedication. That means that data going to the APIs has to be consistent. Constantly changing data streams are a nightmare to manage internally or externally.
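
Consistency sounds mundane, but even a lightweight gate in front of an API call saves a lot of pain. Here is a toy sketch of what I mean; the field names and the record are completely made up.

    # Toy consistency gate for records heading to an external API.
    # The expected fields and types here are invented for illustration.
    EXPECTED_SCHEMA = {"member_id": str, "visit_date": str, "claim_amount": float}

    def validate(record):
        """Return a list of problems; an empty list means the record looks consistent."""
        problems = []
        for field, expected_type in EXPECTED_SCHEMA.items():
            if field not in record:
                problems.append("missing field: " + field)
            elif not isinstance(record[field], expected_type):
                problems.append(field + " should be " + expected_type.__name__)
        return problems

    record = {"member_id": "A123", "visit_date": "2019-11-14", "claim_amount": "90.0"}
    print(validate(record))  # flags claim_amount arriving as a string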

That might have been a lot to consider all in one stretch of thought, but it will all come into context the first time building a team to solve an ML problem becomes a necessity. My answer to where the talent comes from involves blending really great professionals together over time to create high functioning teams. That may involve hiring in key skill sets to help supplement a team or investing in training the team if enough ramp up time exists. The shorter the amount of ramp up time the greater the need to quickly bring in external talent.

#2 How do you get the talent to work together?

Now that we have talked about where the talent comes from and how to think about investing in your teams, let’s switch gears and talk about how to get the teams to work together. This is one of those things that is much easier to talk about than to manage in practice. You can think about the mantra let your leaders lead, let your managers manage, and let your employees succeed. That works well enough when you have agile teams that actually self organize and rapidly get work done. If that is where you are sitting right now, then congratulations, and appreciate what you have.

Teams are really about how the different players work together. I try to think about machine learning engagements as having two key pillars. First, you need to figure out who on the team has deep knowledge of the product, the data, and how the data relates to the customer journey. This is either going to be really obvious or really hard. Sometimes the folks with the greatest institutional knowledge of the data are key SMEs who play an impactful role, or they could be buried deeper in the organization in an analyst role, or maybe they have moved on to another role.

Second, take that person with deep knowledge and help them work with the machine learning expert you found. Pairing these two things together is going to be the most critical lynchpin to what you are doing. Most organizations do not have data structures that were architected from the start to work with machine learning. Figuring out the right places to start, what data to label, and what relates to what is really the beginning of the journey. This is one of the reasons why people with full stack machine learning skills are so important. What does that even mean? Full stack machine learning skills. I can walk into your organization and set up TensorFlow and even get the team sharing some Jupyter notebooks today. Having the right feeds, having the right machine learning hardware, and having access to the right production-side infrastructure to swiftly move data without crashing or breaking things is where full stack skills are essential.
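
The notebook part really is the easy part, and the handoff from notebook to production is where the full stack mindset shows up. Below is a hedged sketch of that handoff: a model trained on stand-in data gets exported to a single file that the production side can load without ever touching the training code. The data, path, and layer sizes are placeholders.

    # Sketch of the notebook-to-production handoff. Everything here is a
    # stand-in: random training data, a throwaway path, a tiny model.
    import numpy as np
    import tensorflow as tf

    x = np.random.rand(256, 4).astype("float32")
    y = (x.sum(axis=1) > 2.0).astype("float32")  # synthetic labels

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit(x, y, epochs=3, verbose=0)

    model.save("/tmp/demo_model.h5")  # notebook side: export the trained model
    serving_copy = tf.keras.models.load_model("/tmp/demo_model.h5")  # production side
    print(serving_copy.predict(x[:3]))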

Maybe truly agile teams are supposed to be self-organizing, but that is probably not just going to happen the first time out the gate. Finding a common or shared purpose sounds a lot easier than it really is in practice. Getting people to self-organize around that common or shared purpose probably requires some type of ground rules or spark.

Sometimes high functioning teams just embrace the challenge and work to knock down any barriers or obstacles they might face. Most teams do not have that level of dedication, persistence, or fortitude. Typically the project need or just a general business problem brings a group together to take some type of action. Managing during those types of situations is always interesting and generally includes trying to bring people with diverse skill sets together.

That covers two types of teams you will encounter: high performing teams that are already assembled and teams that come together based on a specific business problem. Outside of those two common scenarios the other type of talent situation you will face might very well be a solution chasing a problem. It happens now more than ever when the market is saturated with open source projects that let people jump in and start working with complex tools. The next step in that pattern is wanting to do something with that new and exciting tooling. To that end, you may find a solution just waiting for a problem to tackle. However, it might not be the right solution or even remotely close to the course of action that should be taken.

Getting talent to work together for me revolves around the business problem and what the team is trying to achieve. It is hard to rally around an end goal that is nebulous or otherwise problematically co-opted into something other than a resolution to the business problem in question.

We should probably jump in and spend a little bit of time on understanding the tooling necessary to allow the machine learning expert to work with the team in a productive way. You can probably tell by now that my preference is for using something robust like TensorFlow to dig in and start doing machine learning at scale. You could just start out with log files and dig in with an off the shelf product like the ML toolkit from Splunk. That is an example of a way to open the door for the team to start using a common platform to get things done.
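
To be clear about what the log file path buys you, the idea is no more exotic than flagging hours where a count drifts well away from its normal range. The sketch below is plain Python with made-up numbers rather than the Splunk toolkit itself; it is only meant to show the shape of the first step.

    # Plain-Python stand-in for a first pass at log anomaly detection:
    # flag hours where the error count sits far outside the usual range.
    from statistics import mean, stdev

    hourly_error_counts = [12, 9, 14, 11, 10, 13, 96, 12, 11]  # made-up sample

    mu, sigma = mean(hourly_error_counts), stdev(hourly_error_counts)
    for hour, count in enumerate(hourly_error_counts):
        if sigma and abs(count - mu) > 2 * sigma:
            print(f"hour {hour}: {count} errors looks anomalous (mean {mu:.1f})")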

#3 What are these workflows and why do they matter?

Ok we talked about where the talent is going to come from and how to start thinking about getting the team to work together. The next questions should be related to the workflows that exist where machine learning could be used. I generally bucket the workflows into 4 distinct categories: streams of data, warehoused data, live transactions, and bulk/batch jobs that are running. Each one of these workflows requires a different type of machine learning approach. You might think that is a stretch, but it is not. The type of effort to apply machine learning to a streaming set of data is much harder than working with static data. Just because you can spin up machine learning on a stream does not mean the model will be accurate or efficient.

Really solid trained production models that are fast and accurate are really valuable. Seriously, that is where you want to be, but it does not happen by accident. In a streaming data scenario you have to have a method to train and work on models and a method to load operational models into place without disrupting the flow of data.
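
One way to picture that load-without-disruption requirement is a holder object that keeps scoring records while a newer model version gets swapped in behind it. The sketch below uses trivial stand-in models and a five-record stream purely to show the mechanic.

    # Sketch of "train offline, swap in without stopping the stream."
    # The models here are trivial stand-ins; only the swap mechanic matters.
    import threading

    class ModelHolder:
        """Holds the current production model and lets training swap it atomically."""
        def __init__(self, model):
            self._model = model
            self._lock = threading.Lock()

        def score(self, record):
            with self._lock:
                return self._model(record)

        def swap(self, new_model):
            with self._lock:
                self._model = new_model

    holder = ModelHolder(lambda record: record * 1.0)   # v1: pass-through stand-in

    for record in range(5):                             # stand-in for the stream
        print("scored", holder.score(record))
        if record == 2:
            holder.swap(lambda r: r * 10.0)             # v2 arrives mid-stream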

Dealing with warehoused data is easier, but sometimes you forget about speed when you are not dealing with the stream. You can allow a machine learning model to take a lot more time. For example, the models used for visual processing in an automobile are a lot more speed critical than the models used on a history of major league baseball statistics. This is where the team can really grow and develop around the rather static data sets they are looking to evaluate.

Live transactions are one of the more fun parts of the process. The workflow involved in a transaction is normally very clear. A stream is more fluid and less of a pure pattern than a transaction-based view. A video stream is always going to be more variable than a financial transaction on a website, but both have real opportunities for machine learning.

Ok we are really starting to get into some end to end thinking about how you are going to operationalize machine learning models at scale within your organization. You have started to think about your team, how the team is going to work together, and some workflows that could be sources of data to do something with. The next step in our attack today is going to be looking at some very specific problems you could tackle with machine learning.

The patterns we need to make it work are everywhere. Now is the time to just embrace that complexity. The interconnected nature of the digital world creates a scenario where workflows exist all over the place. Workflows exist for all sorts of business processes. Sometimes they are set up with good intentions or they just crystallize out of need. Bolting some type of ML onto a workflow sounds like fun. It is one way of trying to use an advanced process to do something. It could be a recommendation system, anomaly detector, or even a pattern breaking adversarial setup. Things happen within workflows in definable and repeatable ways. That is pretty much the right recipe to jump in and work with some type of machine learning algorithm. Yeah, that might read like a solution chasing a problem and that very well could be the case, or it could be an opportunity waiting to be discovered.

Breaking down potential workflow types into buckets would include content streams, mining static content, transaction based, and bulk/batch processes. Each of these buckets includes different workflow challenges in terms of ML implementations. Obviously, mining static content is the place that a lot of teams start. It is not a moving target and time is probably on your side to dig in and figure out exactly how things are going to work. You have plenty of training time and your model can be applied and tweaked. Perhaps the opposite of that bucket is trying to engage applied ML on content streams. Your model needs to be ready and the process has to move swiftly. Anything that happens within a stream has to be fluid enough to allow volume to continue to flow without creating backlogs or latency. The same type of argument holds true for transaction based workflows, but you get a little bit more headroom on a transaction basis to allow the model to complete work.

#4 What are problems you can solve with ML?

If your team is starting to dig into using machine learning models to do something then you will quickly begin to think about recommendations, detection, sorting, and assistive use cases. You will find that machine learning in the wild is going to be all about the data and the use case. Do you have a source of data that is dependable and reliable, and does that data support the thing you want to do with it? Recently, it feels like a lot more assistive type ML algorithms are being built to help speed up workflows and reinforce processes in the workplace.

Making a recommendation is one problem that ML algorithms tackle pretty well. Within a workflow that involves purchasing things, making recommendations is a useful thing to do and can be very powerful. Really dialing in recommendations to have them be targeted, insightful, and useful can make a really solid recommendation engine highly successful. Like anything else it has to be tuned and maintained or diminishing returns will occur. You can only recommend the same thing so many times to the same user before it simply gets ignored.
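
Stripped down to its smallest form, the purchasing workflow hook can be as simple as suggesting whatever is most often bought alongside the items in the current cart. The sketch below is deliberately tiny and the orders are invented; real engines are far more involved, but the shape of the hook looks like this.

    # Deliberately tiny recommendation sketch: suggest whatever was most
    # often bought alongside the items in the current cart. Orders invented.
    from collections import Counter
    from itertools import combinations

    past_orders = [
        {"coffee", "filters", "mug"},
        {"coffee", "filters"},
        {"tea", "mug"},
        {"coffee", "mug"},
    ]

    co_counts = Counter()
    for order in past_orders:
        for a, b in combinations(sorted(order), 2):
            co_counts[(a, b)] += 1
            co_counts[(b, a)] += 1

    def recommend(cart, top_n=2):
        scores = Counter()
        for item in cart:
            for (seen, other), count in co_counts.items():
                if seen == item and other not in cart:
                    scores[other] += count
        return [item for item, _ in scores.most_common(top_n)]

    print(recommend({"coffee"}))  # items most often bought alongside coffee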

A ton of detection systems exist. Some of them are related to computer vision within the automotive space and a lot of them are built around working with images. These are some of the most interesting use cases out in the wild right now.

Building out really awesome sorting machine learning algorithms always creates the possibility for fun. One of the best use cases for sorting machine learning has to be the reduction of unwanted emails. That type of effort has almost worked too well recently.
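
Email sorting is also the easiest of these to show in a few lines. The toy example below uses scikit-learn with four invented messages; a real filter trains on vastly more data, but the fit-then-predict pattern is the same.

    # Toy version of the email sorting idea using scikit-learn.
    # The messages and labels are invented; real filters use far more data.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    messages = [
        "win a free prize now", "limited time offer click here",
        "meeting moved to thursday", "lunch tomorrow with the team",
    ]
    labels = ["spam", "spam", "ham", "ham"]

    vectorizer = CountVectorizer()
    features = vectorizer.fit_transform(messages)
    classifier = MultinomialNB().fit(features, labels)

    new_message = ["free offer just for you"]
    print(classifier.predict(vectorizer.transform(new_message)))  # expect ['spam']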

One of the really exciting use cases developing out in the wild right now happens to be related to assistive processes and other forms of automation. It is amazing what can be built and utilized right now to help make things happen quicker and to take definable and repeatable processes and make them occur without effort.

#5 What exactly is an ML strategy?

Ok we talked talent, teams, workflows, and problems. Now it is time to dig into your ML strategy. I held off on this one until we had some foundational ground to walk on here. Part of your machine learning strategy has to be about purpose, replication, and reuse. Machine learning is typically applied as part of a definable and repeatable process. That is how you get quality and speed. You have guardrails in place that keep things within the confines of what is possible for that model. Outside of that you have to be really clear on the purpose of using machine learning to do something for your organization. You could do it for purely employee engagement reasons. The team really wants to do it and you let them make that happen. Sure they might figure out a novel way to use that energy and engagement to produce something that aligns to the general guiding purpose of the organization. Some of that is what strategy is about.

Apply your philosophy in a uniform way based on potential ROI after you know you are lining things up for the right reasons. Then you can begin to think about replication, both of the results and across as many applications as possible. Transfer learning really plays into this, and you will learn real quick that after you have figured out how to do it with quality and speed, applying it to a suite of things can happen much faster. That is the power of your team coming together and being able to deliver results.

Seeing the strategy beyond the trees in the random forest takes a bit of perspective. Sometimes it is easier to lock in and focus on a specific project and forget about how that project fits into a broader strategy. Having a targeted, focused ML strategy that is applied from the top down can help ensure the right executive sponsorship and resources are focused on getting results. Instead of running a bunch of separate efforts that are self-incubating, it might be better to have a definable and repeatable process to roll out and help ensure the same approach can be replicated in cost effective ways for the organization.

An example of a solid ML strategy might be a cost containment or cost saving program that introduces assistive ML products to allow a workforce to do things quicker with fewer errors. Executing that strategy would require operationalizing it and collecting data on the processes in action to track, measure, and ensure positive outcomes.

#6 What do you mean by ML vectors?

Now that we have kicked the tires on machine learning strategy we need to really dig into the hard stuff. I know everything else that I have been talking about today has been leading up to this. We are really going to dig into machine learning vectors. This is the most technical part of machine learning we are going to talk about today and it can be a little hard to begin to attack. That is why we are going to start small and work our way up to the hard things. Using machine learning at scale means that you have figured out how to use the model within your technology stack and you know how and where to apply that model.

This is where vectors really come into play. If you have a stream of data and you need to call an API to an external product that verifies the images, that is an important part of the vector. In this case your vector is an external approach point that lets the workflow jump outside of itself to apply a machine learning model. At this point, the model executes, the image is identified, and data is passed back. Hopefully that is happening in near real time, but if you are having to send over huge images to the external source, you have transport latency and model work time to account for. Your lightning fast stream might be slowing down rapidly.

If your vector had been internal and was lightweight enough to run as the image is stored, or even to process in memory as the stream is occurring, you have a much faster path to usable information.

Ok, let’s try to explain this in a different way. Data arrives in a lot of different ways. The method of transport and where the data is being transported can form a vector. In this very specific use of the word vector, I am trying to describe how ML would be applied to some type of incoming data. This could be during an API exchange, it could be an API call within a process, on an internal cloud using a trained model, reaching across to an external cloud, on premise within your datacenter, or even within an edge computing instance. At the edge, you will need to make sure your model is well trained and deployed in a lightweight way to help drive your use case without adding a bunch of computing time within what should be your fastest use case.
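
For the lightweight, internal end of that spectrum, one concrete option is shrinking a trained Keras model with the TensorFlow Lite converter so it can score in process or at the edge instead of shipping data out over the network. The sketch below is illustrative only: the model is untrained and the input is a zero placeholder.

    # Sketch of the lightweight internal vector: convert a Keras model to
    # TensorFlow Lite and score locally, with no network hop involved.
    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(4, activation="relu", input_shape=(3,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

    tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

    interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], np.zeros((1, 3), dtype=np.float32))
    interpreter.invoke()
    print(interpreter.get_tensor(out["index"]))  # local score, placeholder input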

Figuring out the right way to apply an ML algorithm requires planning. Knowing how the application will work and where it will be is really important. Knowing the vector makes a big difference on the flow of data and what can be done. In my mind that makes it foundational to thinking about how to design and deploy an ML model in production. Dealing with streaming data makes the directionality of the vector even more important.

#7 What is a compendium of key performance indicators?

Based on time we can only spend a few moments on the compendium of KPIs and how important that is to the overall execution of your machine learning strategy. You have to be able to present the results of machine learning within the organization in terms of return on investment and other key drivers that show criticality to the business. Honestly, you have to be able to think about your machine learning dashboard as something that is so simple to consume that in 30 seconds the overall status is consumable and intuitive to understand. That is good reporting. It takes time and a lot of hard work to pull off, but it is really impactful when you do get to this point in the execution. I personally begin with the end in mind as part of my strategy, and while people are executing I work to figure out how to compile the measures and how to tell the story. Sometimes the teams that are directly involved in the work do not have the time to dig in and dashboard as they are building and tuning models. Getting the data story sorted out is a major milestone in your machine learning journey.
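
Even the first version of that kind of rollup can be humble. Here is a small sketch of what pulling individual model results into a compendium might look like; the models, measures, and numbers are all invented.

    # Sketch of rolling per-model results into a simple KPI compendium.
    # Every model name, measure, and value here is invented.
    kpi_compendium = [
        {"model": "claim routing assist", "kpi": "hours saved per week", "value": 120},
        {"model": "duplicate detection", "kpi": "errors prevented per month", "value": 45},
        {"model": "recommendation engine", "kpi": "incremental orders per month", "value": 310},
    ]

    for entry in kpi_compendium:
        print(f"{entry['model']:<24} {entry['kpi']:<30} {entry['value']:>6}")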

Everything about figuring out how to use ML in your organization has to come down to a strategic plan. You need to figure out how to visualize that strategic plan in action and that will probably involve having a compendium of key performance indicators that show the value within delivering your ML solution. Beginning with that outcome in mind is important. It helps to frame the entire solution being operationalized. At a strategic level, the things being operationalized by teams have to contribute to specific budget-based outcomes. That inherently connects expenditure to benefit throughout the organization in a measurable way.

From ideation to being operationalized, ML benefits must be mapped to a KPI. All of those things wrapped up together in a compendium help present a picture of what strategic planning is contributing to specific action, and all of that gets defined in the form of clear and understandable results. Bringing those examples together within a compendium is the right way to ensure the organization has a clear view of what is going on. Understanding how the elements of the compendium of KPIs work together is what helps people visualize the strategic plan the organization is executing. It paints a very clear picture of the strategy and whether that strategy is being operationalized correctly.

#8 What are some examples of ML turning the wheel?

Processes abound in the workplace. Some sets of processes come together to form a workflow. Within that workflow it might be possible for ML to help engage in a turn of the wheel. That specific application of ML could help push things forward, nudge things along, or even keep the train on the tracks if that is the desired outcome. Some of the ways I have seen ML turning the wheel include recommendations, detection, sorting, and assistive deployments. Every one of those possible turns can help add value and make ML part of ongoing strategic planning.

In the healthcare space, we are seeing more and more really valuable diagnostic tools starting to show up. Models are being deployed that engage in complex detection from images. Some of those models have been highly accurate in identifying skin cancer or even checking for potential signs of atrial fibrillation. Each one of those advances helps turn the wheel just a little bit allowing automation of tasking to help push things forward. Even if these advances in the healthcare space turn out to only be assistive for physicians they are still major contributions.

20191108 Thoughts

Returning to that exact moment of extreme serenity that happened years ago takes a certain degree of something. Perhaps that something is the essence of tranquility, or so it goes… my Nespresso Expert machine just produced two shots of espresso today. Mentally we have to take one step at a time to climb the mountain toward that extreme goal that was established long before today. Every step has to drive things forward. We have to strive toward achieving that goal. Within that whirlwind of striving and driving a moment of serenity would echo past pressure and limitations. That is where my thoughts are right now. They are somewhere between forever and now, limited by something existential.

Using the Surface Mobile Mouse

Just the other day (last week), I ordered a Surface Mobile Mouse in Cobalt blue from the Microsoft online store. It is a low profile bluetooth mouse that is currently connected to my Google Pixelbook Go. Setup was super easy and involved two batteries and pairing the device. All of that took about thirty seconds. Using the Surface Mobile Mouse has been pain free and the device has worked really well. My only criticism of the mouse so far is that the bottom is pretty slick, to the point where if the desk is at a decent angle the mouse just slides backward.