Blog

Time to Drop LastPass? The Most Irresponsible, Idiotic Post Ever!

<rant>

I have not blogged for a while, but my blood is boiling over this irresponsible call by MakeUseOf, a site I used to respect:

It’s Time to Drop LastPass: How to Migrate to a Better Password Manager

As a long-time LastPass user, my first reaction was panic: yet another breach? But no such thing. The article simply provides step-by-step instructions on how to switch from LastPass to 1Password, Dashlane, or KeePass, should you so wish.

But why? Who knows. I can’t find any reference to a new LastPass breach, bug, or any issue not already known and remedied a long time ago. What I do know is that the safety and integrity of the central password manager is critical for all of us, and moving to another provider is… well, kind of a Big Deal.

Which is why such crap calls are utterly irresponsible.  I lost all respect for MakeUseOf.

</rant>  (and I am feeling better now)

🙂

Artificial intelligence and privacy engineering: Why it matters NOW

As artificial intelligence proliferates, companies and governments are aggregating enormous data sets to feed their AI initiatives.

Although privacy is not a new concept in computing, the growth of aggregated data magnifies privacy challenges and leads to extreme ethical risks such as unintentionally building biased AI systems, among many others.

Privacy and artificial intelligence are both complex topics. There are no easy or simple answers because solutions lie at the shifting and conflicted intersection of technology, commercial profit, public policy, and even individual and cultural attitudes.

Given this complexity, I invited two brilliant people to share their thoughts in a CXOTALK conversation on privacy and AI. Watch the video embedded above to see the entire discussion, which was Episode 229 of CXOTALK.

Michelle Dennedy is the Chief Privacy Officer at Cisco. She is an attorney, author of the book The Privacy Engineer’s Manifesto, and one of the world’s most respected experts on privacy engineering.

David Bray is Chief Ventures Officer at the National Geospatial-Intelligence Agency. Previously, he was an Eisenhower Fellow and Chief Information Officer at the Federal Communications Commission. David is one of the foremost change agents in the US federal government.

Here are edited excerpts from the conversation. You can read the entire transcript at the CXOTALK site.

What is privacy engineering?

Michelle Dennedy: Privacy by Design is a policy concept that had been hanging around for ten years in the network world, coming out of Ontario, Canada, and championed by Ann Cavoukian, who was Ontario’s privacy commissioner at the time.

But in 2010, we introduced the concept at the Data Commissioner’s Conference in Jerusalem, and over 120 different countries agreed we should contemplate privacy in the build, in the design. That means not just the technical tools you buy and consume, [but] how you operationalize, how you run your business; how you organize around your business.

And, getting down to business on my side of the world, privacy engineering is using the techniques of the technical, the social, the procedural, the training tools that we have available, and in the most basic sense of engineering to say, “What are the routinized systems? What are the frameworks? What are the techniques that we use to mobilize privacy-enhancing technologies that exist today, and look across the processing lifecycle to build in and solve for privacy challenges?”

And I’ll double-click on the word “privacy.” Privacy, in the functional sense, is the authorized processing of personally-identifiable data using fair, moral, legal, and ethical standards. So, we break down each one of those things and say, “What are the functionalized tools that we can use to promote that whole panoply and complicated movement of personally-identifiable information across networks with all of these other factors built in?” If we can change the fabric down here, and our teams can build this in and make it routinized and invisible, then the rest of the world can work on the more nuanced layers that are also difficult and challenging.

Where does privacy intersect with AI?

David Bray: What Michelle said about building beyond and thinking about networks gets to where we’re at today, now in 2017. It’s not just about individual machines making correlations; it’s about different data feeds streaming in from different networks where you might make a correlation that the individual has not given consent to with […] personally identifiable information.

For AI, it is just the next layer of that. We’ve gone from individual machines to networks, and now to something that looks for patterns with unprecedented capability. At the end of the day, it still goes back to: what has the individual given consent to? What is being handed off by those machines? What are those data streams?

One of the things I learned in Australia, as well as in Taiwan, as an Eisenhower Fellow is the question: “What can we do to separate the setting of our privacy permissions, and what we want to be done with our data, from where the data is stored?” Because right now, we have this more simplistic model of, “We co-locate on the same platform,” and then maybe you get an end-user agreement that’s thirty or forty pages long, and you don’t read it. Either you accept, or you don’t accept; if you don’t accept, you won’t get the service, and there’s no opportunity to say, “I’m willing to have it used in this context, but not these contexts.” And I think that means AI is going to raise questions about the context in which we need to start using these data streams.

How does “context” fit into this?

Michelle Dennedy: We wrote a book a couple of years ago called “The Privacy Engineer’s Manifesto,” and in the manifesto, the techniques that we used are based on really foundational computer science.

Before we called it “computer science,” we used to call it “statistics and math.” But even in a geometric proof, nothing happens without context. And so, the thought that you have one tool that is appropriate for everything has simply never worked in engineering. You wouldn’t build a bridge with just nails and no hammers. You wouldn’t put something in the jungle that was built the same way as a structure you would build in Arizona.

So, thinking about use-cases and contexts with human data, and creating human experiences, is everything. And it makes a lot of sense. If you think about how we’re regulated primarily in the U.S. (we’ll leave the bankers off for a moment because they’re different agencies), it’s the Federal Communications Commission and the Federal Trade Commission; so, we’re thinking about commercial interests; we’re thinking about communication. And why is communication wildly imperfect? Because it’s humans doing all the communicating!

So, any time you talk about something as human and humane as processing information that impacts the lives, cultures, and commerce of people, you’re going to have to really over-rotate on context. That doesn’t mean everyone gets a specialty thing, but nor does it mean that everyone gets a car in any color they want so long as it’s black.

David Bray: And I want to amplify what Michelle is saying. When I arrived at the FCC in late 2013, we were paying for people to volunteer what their broadband speeds were in certain select areas because we wanted to see whether they were getting the broadband speed they were promised. That cost the government money, and it took a lot of work, so we effectively wanted to roll out an app that could allow people to crowdsource the measurements and, if they wanted to, see what their score was and share it voluntarily with the FCC. Recognizing that if I stood up and said, “Hi! I’m with the U.S. government! Would you like to have an app […] for your broadband connection?” Maybe not that successful.

But using the principles you mentioned about privacy engineering and privacy by design: one, we made the app open source so people could look at the code. Two, we designed the code so it didn’t capture your IP address and didn’t know who you were within a five-mile radius. It gave some fuzziness to your actual, specific location, but was still good enough to inform whether or not broadband speed was as desired.

And once we did that, with terms and conditions only two pages long, we again threw down the gauntlet and said, “When was the last time you agreed to anything on the internet that was only two pages long?” Rolling that out, the app ended up being the fourth most-downloaded behind Google Chrome, because there were people who looked at the code and said, “Yea, verily, they have privacy by design.”

And so, I think that this principle of privacy by design is making the recognition that one, it’s not just encryption but then two, it’s not just the legalese. Can you show something that gives people trust; that what you’re doing with their data is explicitly what they have given consent to? That, to me, is what’s needed for AI [which] is, can we do that same thing which shows you what’s being done with your data, and gives you an opportunity to weigh in on whether you want it or not?
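As an aside for technically minded readers, here is a minimal sketch of the kind of location fuzzing David describes: snapping precise coordinates to a coarse grid so a report can only be resolved to an area roughly five miles across. This is illustrative Python under an assumed grid-snapping approach, not the FCC app’s actual code.

    import math

    CELL_MILES = 5.0
    MILES_PER_DEG_LAT = 69.0  # approximate miles per degree of latitude

    def fuzz_location(lat, lon):
        """Return the center of the ~5-mile grid cell containing (lat, lon)."""
        lat_step = CELL_MILES / MILES_PER_DEG_LAT
        # Longitude degrees shrink toward the poles, so scale by cos(latitude).
        lon_step = CELL_MILES / (MILES_PER_DEG_LAT * max(math.cos(math.radians(lat)), 0.01))
        fuzzy_lat = (math.floor(lat / lat_step) + 0.5) * lat_step
        fuzzy_lon = (math.floor(lon / lon_step) + 0.5) * lon_step
        return round(fuzzy_lat, 4), round(fuzzy_lon, 4)

    # A speed-test report would carry only the fuzzed coordinates:
    print(fuzz_location(38.8977, -77.0365))

Every report from within the same cell produces identical coordinates, so the data remains useful for mapping broadband coverage while individual locations stay ambiguous.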

Does AI require a new level of information security?

David Bray: So, I’ll give the simple answer which is “Yes.” And now I’ll go beyond that.

So, shifting back first to what Michelle said, I think it is great to unpack that AI is many different things. It’s not a monolithic thing, and it’s worth deciding whether we are talking about simply machine learning at speed, or about neural networks. This matters because five, ten, fifteen years ago, the sheer amount of data available to you was nowhere near what it is right now, let alone what it will be in five years.

If we’re right now at about 20 billion networked devices on the face of the planet relative to 7.3 billion human beings, estimates are at between 75 and 300 billion devices in less than five years. And so, I think we’re beginning to have these heightened concerns about ethics and the security of data. To Scott’s question: because it’s just simply we are instrumenting ourselves, we are instrumenting our cars, our bodies, our homes, and this raises huge amounts of questions about what the machines might make of this data stream. It’s also just the sheer processing capability. I mean, the ability to do petaflops and now exaflops and beyond, I mean, that was just not present ten years ago.

So, with that said, the question of security. It’s security, but also we may need a new word. I heard in Scandinavia, they talk about integrity and being integral. It’s really about the integrity of that data: Have you given consent to having it used for a particular purpose? So, I think AI could play a role in making sense of whether data is processed securely.

Because the whole challenge is right now, for most of the processing we have to decrypt it at some point to start to make sense of it and re-encrypt it again. But also, is it being treated with integrity and integral to the individual? Has the individual given consent?

And so, one of the things raised when I was in conversations in Taiwan is the question, “Well, couldn’t we simply have an open-source AI, where we give our permission and our consent to the AI to have our data be used for certain purposes?” For example, it might say, “Okay, well I understand you have a data set served with this platform, this other platform over here, and this platform over here. Are you willing to have that data be brought together to improve your housekeeping?” And you might say “no.” He says, “Okay. But would you be willing to do it if your heart rate drops below a certain level and you’re in a car accident?” And you might say “yes.”

And so, the only way I think we could ever possibly do context is not going down a series of checklists and trying to check all possible scenarios. It is going to have to be a machine that can talk to us and have conversations about what we do and do not want to have done with our data.
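David’s example points toward consent recorded per context rather than as one all-or-nothing agreement. Here is one minimal way to sketch that idea in Python; the class and context names are hypothetical, not any real platform’s API.

    class ConsentPolicy:
        """Tracks which processing contexts a person has authorized."""

        def __init__(self):
            self.allowed_contexts = set()

        def grant(self, context):
            self.allowed_contexts.add(context)

        def permits(self, context):
            return context in self.allowed_contexts

    policy = ConsentPolicy()
    policy.grant("medical-emergency")  # yes: combine heart-rate and crash data
    # no grant for "housekeeping", so that combination is refused

    for context in ("housekeeping", "medical-emergency"):
        decision = "combine data" if policy.permits(context) else "refuse"
        print(context, "->", decision)

The structural point: the permission check travels with the purpose of the processing, not with the platform where the data happens to be stored.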

What about the risks of creating bias in AI?

Michelle Dennedy: Madeleine Clare Elish wrote a paper called “Moral Crumple Zones,” and I just love even the visual of it. If you think about cars and what we know about humans driving cars, they smash into each other in certain known ways. And the way that we’ve gotten better and lowered fatalities of known car crashes is using physics and geometry to design a cavity in various parts of the car where there’s nothing there that’s going to explode or catch fire, etc. as an impact crumple zone. So all the force and the energy goes away from the passenger and into the physical crumple zone of the car.

Madeleine is working on exactly what we’re talking about. We don’t know when it’s unconscious or unintentional bias because it’s unconscious or unintentional bias. But we can design in ethical crumple zones, where we test data before feeding it in, just as we do with sandboxing or with dummy data before going live in other types of IT systems. We can decide to use AI technology and add in known issues for retraining that database.

I’ll give you Watson as an example. Watson isn’t a thing; Watson is a brand. The way the Watson computer beat Jeopardy contestants was by learning Wikipedia: processing mass quantities of stated data, at whatever level of authenticity, and pattern-matching on it.

What Watson cannot do is selectively forget. Your brain and your neural network are better at forgetting and ignoring data than they are at processing it. We’re trying to make our computers simulate a brain, except that brains are good at forgetting, and AI is not good at that yet. So, you can take the tax code, which would fill three ballrooms if you printed it out on paper, feed it into an AI-type dataset, and train it on the known amounts of money someone should pay in a given context.

What you can’t do, and what I think would be fascinating if we did do, is if we could wrangle the data of all the cheaters. What are the most common cheats? How do we cheat? And we know the ones that get caught, but more importantly, how do […] get caught? That’s the stuff where I think you need to design in a moral and ethical crumple zone and say, “How do people actively use systems?”

The concept of the ghost in the machine: how do machines that are well-trained with data experience degradation over time? Either they’re not pulling from datasets because the equipment is simply … You know, they’re not reading tape drives anymore, or the system is not being fed fresh data, or we’re not deleting old data. There are a lot of different techniques here that have yet to be deployed at scale, and that I think we need to consider before we rely too heavily [on AI] without human checks and balances, and process checks and balances.

How do we solve this bias problem?

David Bray: I think it’s going to have to be a staged approach. As a starting point, you almost need to have the equivalent of a human ombudsman – a series of people looking at what the machine is doing relative to the data that was fed in.

And you can do this in multiple contexts. It could just be internal to the company, and it’s just making sure that what the machine is being fed is not leading it to decisions that are atrocious or erroneous.

Or, if you want to gain public trust, share some of the data and some of the outcomes, but abstract anything associated with any one individual and just say, “These types of people applied for loans. These types of loans were awarded,” so we can make sure the machine is not hinging on some bias that we don’t know about.

Longer-term, though, you’ve got to write that ombudsman. We need to be able to engineer an AI to serve as an ombudsman for the AI itself.

So really, what I’d see is not just AI as just one, monolithic system, it may be one that’s making the decisions, and then another that’s serving as the Jiminy Cricket that says, “This doesn’t make sense. These people are cheating,” and it’s pointing out those flaws in the system as well. So, we need the equivalent of a Jiminy Cricket for AI.
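Returning to David’s loan example, the abstraction step he describes can be as simple as publishing only aggregate application and approval counts per category, never individual records. The sketch below is a toy with made-up data, not any real auditing tool.

    from collections import Counter

    # (category, approved) pairs; in practice these come from the decision system.
    applications = [
        ("small-business", True), ("small-business", False),
        ("mortgage", True), ("mortgage", True), ("mortgage", False),
    ]

    totals, approvals = Counter(), Counter()
    for category, approved in applications:
        totals[category] += 1
        if approved:
            approvals[category] += 1

    # Only aggregates leave the building; a reviewer compares rates across
    # categories and flags unexpected skew.
    for category in totals:
        rate = approvals[category] / totals[category]
        print(f"{category}: {totals[category]} applied, {rate:.0%} approved")

An ombudsman, human or machine, can then ask why one category’s approval rate diverges without ever seeing a single applicant’s record.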

CXOTALK brings you the world’s most innovative business leaders, authors, and analysts for in-depth discussion unavailable anywhere else. Enjoy all our episodes and download the podcast from iTunes and Spreaker.

(Cross-posted @ ZDNet | Beyond IT Failure Blog)

Augmented reality: Field service proof points in the enterprise

Augmented reality (image supplied by Oracle)

Although we read about augmented reality in the popular press, the focus tends toward consumer experiences like Pokemon Go and Snapchat. While the consumer side of AR is huge, there are also important applications in the enterprise.

A presentation at Oracle’s recent Modern Customer Experience conference demonstrated augmented reality applied to field service management. It’s one example where the value of AR is obvious and dramatic.

Because the conference was in Las Vegas, the demo showed a field service technician using AR on a mobile phone to repair a broken slot machine. The demo is instantly compelling because of the visuals and shows a practical enterprise use case for augmented reality.

Although the field service management industry has been innovating around knowledge delivery to technicians for decades, powerful mobile hardware combined with ubiquitous connectivity and AR software changes the game.

ALSO READ: Augmented reality: An enterprise business imperative

To gain an in-depth view of how AR is changing field service, I put questions to Shon Wedde, Oracle’s Senior Director of CX Product Management, and Joshua Bowcott, Product Manager, Oracle Service Cloud. They also captured the sequence of screens in the gallery embedded above.

When is augmented reality most suitable for field service?

Customers adopt AR for various field service applications across all industries. Traditionally, AR emerged where massive pieces of equipment were used — like in oil and gas — as well as in M2M (machine-to-machine) situations and on factory assembly lines. Companies selling complex, connected equipment in industries like manufacturing, medical, and automotive have realized the importance of adopting AR for field service.

Augmented reality is most suitable when it involves connected, complex equipment in a data-rich environment.

The concept of AR has been around for years. What’s new is our ability to take IoT and customer service technologies, such as policy automation and workflow, and integrate them into an AR scenario. Policy automation guides dynamic animation, and IoT data provide real-time feedback, creating a rich environment for AR and field service technicians to work.

We should also note that AR applications go beyond field service scenarios, enriching not only B2B and B2C interactions but also internal company training, self-service, and assisted-service experiences. We explain those in more detail below.

What type of equipment does the field service technician need?

A field service technician can use any mobile device, including cell phones, tablets, goggles, etc.

How does the equipment vendor create the augmented reality content used by field service technicians?

AR content relies on information that already exists. Companies like PTC utilize existing product CAD drawings, scaling them to match real-life animation with PTC’s ThingWorx software. A field service technician pulls existing information from the contact center’s knowledge base, as they do today.

For example, an AR-equipped mobile device can “point” at a connected piece of equipment, such as a slot machine, and determine its make and model. The slot machine’s problems are also transferred via IoT data. The system uses this information to filter the contact center’s existing knowledge base for articles that pertain to this particular instance, eliminating the technician’s need to manually figure out the machine’s make, model, and where the problems originate.

From there, service solutions like Policy Automation guide a technician step-by-step with animation, to resolve the slot machine’s issue. If a replacement part is needed, a technician can use integrated commerce functionalities to order that part, specific to the machine’s make and model. Finally, the entire experience is captured and logged alongside the customer’s profile and history with the device and the company.

We should note that the Oracle Policy Automation interview and the user’s answers dictate which AR experience is loaded. [The gallery embedded above] only shows one path [of many possibilities.]
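To make that flow concrete, here is a hypothetical sketch of the sequence just described: identify the machine, filter the knowledge base by make, model, and fault, surface the guided steps, and order a part if one is needed. None of these names are Oracle APIs; the functions and data are stand-ins for the integrations mentioned above.

    # Toy knowledge base keyed by make, model, and IoT fault code.
    KNOWLEDGE_BASE = [
        {"make": "AcmeGaming", "model": "SlotMaster 3000", "fault": "E42",
         "article": "Replace the bill-validator belt", "part": "BELT-042"},
        {"make": "AcmeGaming", "model": "SlotMaster 3000", "fault": "E17",
         "article": "Reseat the display cable", "part": None},
    ]

    def repair_with_ar(scan):
        # Stand-in for the AR/IoT step that recognizes the machine and its fault.
        make, model, fault = scan["make"], scan["model"], scan["fault"]

        # Filter the knowledge base to this exact machine and reported fault,
        # sparing the technician the manual lookup.
        matches = [a for a in KNOWLEDGE_BASE
                   if (a["make"], a["model"], a["fault"]) == (make, model, fault)]

        log = {"device": (make, model), "fault": fault, "steps": []}
        for article in matches:
            log["steps"].append(article["article"])  # animated step-by-step in real AR
            if article["part"]:
                log["steps"].append(f"Order part {article['part']}")  # commerce step
        return log  # captured alongside the customer's profile and history

    print(repair_with_ar({"make": "AcmeGaming", "model": "SlotMaster 3000", "fault": "E42"}))

In the real product, the Policy Automation interview would choose among many such paths dynamically; this sketch shows only the single-path idea.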

What are the primary applications today for augmented reality in field service?

Companies across all industries are using AR, especially those maintaining assets. They are B2B as well as B2C companies. Oracle has seen use cases ranging from wind farms, control systems, and medical equipment to household appliances and motorcycle manufacturers.

AR applications extend far beyond field service. There’s a massive shift underway as AR emerges as a new consumer user interface. Facebook is now delivering AR as part of its core platform and we see mainstream AR technology in apps like Snapchat, for example.

Consumers are becoming increasingly comfortable with AR as a new self-service channel. For example, a consumer wouldn’t call a field service technician to their house to fix a coffee machine. Instead, AR would walk the consumer through steps on his or her mobile device or tablet to diagnose the coffee machine issue and then either change a coffee filter or click to buy a new coffee filter. This AR scenario isn’t designed to enable an agent. Instead, it’s bringing the consumer directly into the self-service experience. He or she can interact with an agent via chat or video chat right on the device, as an assisted-service experience if needed.

Furthermore, combining AR with other Oracle technologies enables businesses to service equipment faster, without dispatching a technician or expert, allowing for quicker resolution. One expert technician can capture an entire installation or service experience with AR and share it as a virtual reality training program, available to a company’s employees anytime and anywhere in the world.

Disclosure: Oracle is a consulting client

(Cross-posted @ ZDNet | Beyond IT Failure Blog)

Digital survival: The transformation imperative

 

Many discussions of digital transformation focus on marketing metrics such as page views, clicks, and even mentions in the press. Although marketing is important, this limited and unsophisticated view ignores underlying issues such as improving business speed, agility, and responsiveness to customers.

Only by rethinking business models can we meaningfully gain benefit from digital transformation in departments across the company including operations, supply chain, manufacturing, and customer service. Even finance and accounting should evolve as an organization responds to the changing expectations of modern consumers.

Genuine digital transformation – not the veneer of marketing, but the real thing – has become a business imperative. Over 50 percent of the Fortune 500 have disappeared in the last 15 years or so, demonstrating the need for organizations to evolve.

Even when companies commit to a program of digital transformation, the results are often disappointing. Despite large investments, 59 percent of respondents in one survey reported that “digital transformation has not delivered high business impact at their organization.”

Against this backdrop of market confusion, I invited a digital transformation expert, practitioner, and author to participate as a guest on episode 208 of CXOTALK, a series of conversations with the world’s leading innovators.

Anurag Harsh is a founding executive of Ziff Davis (no relation to ZDNet), one of the largest publishers of technology content in the world. He is a prolific author (of multiple books) and puts theory into practice as a senior executive at Ziff Davis.

The conversation with Anurag Harsh spans the practical to the philosophical; the cultural and psychological to the technological. Among the wide-ranging topics, we discuss strategies for digital transformation, each targeting a specific business situation. From the structural swivel to the inverse acquisition, Anurag offers a prescription for many digital transformation situations.

Watch the video conversation embedded above, but also read the complete transcript of our talk. Here are edited excerpts and highlights:

Why is digital transformation so important?

In the last fifteen or sixteen years, more than half of all Fortune 500 companies have become insolvent, been acquired, or stopped doing business altogether. And if you just look at last year, 50% of Fortune 500 companies declared a loss. The pace of transformation has become a revolution. Rivalries have deepened, and business models have been dislocated. The only constant is the growing severity of digital disruption.

Because of disruption, there’s despondency, and that’s compelling companies to want these digital initiatives. And they are investing a lot of money, which mostly results in disappointment due to the absence of concrete strategy. As markets shift downward, many companies try to counter the spiral by initiating frantic investments and digital initiatives. Some of them are hiring Chief Digital Officers, and some of them are looking at their CIOs and CMOs to counter these disruptive effects.

Describe your model for digital transformation.

There are five things that companies need to think about. These are the terms I use:

  • Structural swivel
  • Inverse acquisition
  • Offshoot
  • Coattail rider
  • Oiling the hinges

I will describe the first three now.

Structural swivel. If you talk to any CTO or CIO, they all have legacy systems and techniques that can impede their ability to execute. By altering the company’s configuration to spotlight digital initiatives, executives can swiftly escalate the speed of transformation. It’s a tactic that requires earmarking funds and human resources for digital initiatives, and placing digital executives in command of existing business processes.

For example, a local bank has started to swivel actively. Remember, this is the structural swivel we’re talking about.

It swiveled out of a conventional branch-driven model by venturing outside to recruit a CDO (Chief Digital Officer). The bank empowered this person with complete corporate supervision, comprising all branches, which were still the lion’s share of the bank’s income. All product, tech, sales outlets, and marketing units started reporting to the new CDO. To push for digital transformation, each regional division also hired a committed CDO at the same level as the local bank president. These changes were intended to hasten the bank’s conversion into a soup-to-nuts digital enterprise and to organize a purely digital experience across all business channels. That’s what I call a structural swivel.

Inverse acquisition. There are a lot of businesses that have unearthed quick wins ─ quick triumphs ─ by placing boundaries on the digital products so they can function autonomously and uninhibited by traditional processes. Just put them in a corner somewhere. It’s like, “Off you guys go!”

However, the moment a digital project shows its usefulness, shouldn’t subsequent tasks follow suit? Preserving the project’s autonomy restricts its influence on other businesses. So, one possibility is to absorb the traditional businesses into the new digital unit, spreading the transformation business-wide and compelling the rest of the company to abandon its archaic approaches. This is what I call “the inverse acquisition.”

This tactic requires hard work. It comprises the comprehensive moving and resettlement of technology manifestos, company structures, and processes, and ultimately consumers from the traditional business to the new model. You must be cautious to ensure the company doesn’t collapse into disorder during the changeover.

I’ll give you an example. The British retailer John Lewis acquired buy.com.uk in 2001. It inherited vital technology and talent that it used to erect its own e-commerce business quickly. John Lewis later commenced a gigantic undertaking to reconstruct its web and e-commerce framework, which involved assimilating over 30 existing tech systems. It then launched an e-commerce site around 2013, fully connected with its supply chain, delivery conduits, and physical stores. The ten-year dedicated effort increased online sales by close to 30%. So, inverse acquisition. That works.

Offshoot. It’s unrealistic always to expect a new digital operation to absorb the traditional business, especially if the digital business is not yet developed sufficiently to absorb a larger unit. Also, it may focus on too dissimilar a fragment of the value chain.

In these cases, you can grow those ventures by segmenting the separate fragments into distinct businesses that can develop outside the principal line of business.

There’s an example here as well. BBVA Compass, a Spanish bank, had a software development division called Globalnet that they used to fuel their technology initiatives for over a decade. A few years ago, Globalnet, this little software development division, transformed into a company called BEEVA, which is an offshoot for creating and marketing business web services.

Although BEEVA powered the base technology for BBVA ─ the Spanish bank’s transition into digital banking ─ bank executives understood the software division’s innate potential. As an independent services business, BEEVA helps other banks do what BBVA has done using BEEVA’s groundbreaking cloud technology platform.

In this instance, a structural swivel or inverse acquisition would not have worked. Why? Because the bank was ultimately a financial services company and its software division BEEVA was a web services unit with functionality that was different from the bank’s core business.

So, that’s what I call “offshoot.”

What is authenticity in digital transformation?

Consumers expect authenticity from corporations and individuals alike. As consumer psychology changes, digital and marketing must enter a new era where human needs — values and connections — define success and failure. This is a call to action for marketers and advertising executives to change their perspective towards consumers.

Companies can no longer see consumers as gullible moneybags or conquests. They must see consumers as community members, as human beings, who crave trust.

You see the theme here? We’re talking about technology and digital, but what I’m getting at is connection. Consumers crave trust: predictability, transparency, respect. I call it the relationship era.

Your corporate values must resonate at every level of the infrastructure. They have to emanate outward to the company’s employees, customers, suppliers, stakeholders, neighbors, and even your relationship with the Earth! Merely projecting an image is akin to falsity. Companies must steadfastly practice what they preach.

The public today cares not only about the cost and quality of products and services. People also care about the values and conduct of the providers. Trust, reliability, and ethics often supersede quality and affordability.

Thank you to my colleague, Lisbeth Shaw, for assistance with this post.

CXOTALK brings you the world’s most innovative business leaders, authors, and analysts for in-depth discussion unavailable anywhere else. Enjoy all our episodes.

(Cross-posted @ ZDNet | Beyond IT Failure Blog)

McKinsey: AI, jobs, and workforce automation

 

For business people, AI presents a variety of challenges. On a technology level, artificial intelligence and machine learning are complicated to develop and demand rich data sets to produce meaningful results. From a business perspective, many leaders have difficulty figuring out where to apply AI and even how to start the machine intelligence journey.

Making matters worse, the constant drumbeat of AI hype from every technology vendor has created a continual barrage of noise that confuses the market about the real possibilities of AI.

To cut through this noise, I have invited many world-leading practitioners to share their expertise as part of the CXOTALK series of conversations with innovators.

For episode 219 of CXOTALK, I spoke with Michael Chui, a Principal at the McKinsey Global Institute (MGI), and David Bray, an Eisenhower Fellow who is also CIO at the Federal Communications Commission.

The McKinsey Global Institute has released a variety of research reports on topics related to AI, automation, and jobs. For example, see this article on the fundamentals of workplace automation.

As you can see in the graphic below, Chui and his team examined a variety of industries looking at the impact of automation, including AI, on the workforce.

CXOTALK McKinsey - automation and AI

Image from McKinsey Global Institute

Another fascinating graphic showing automation potential and wages for US jobs:

CXOTALK McKinsey - automation, AI, and wages

Image from McKinsey Global Institute

The conversation between Michael Chui and David Bray covered key points about the relationship of business and the workforce to automation and AI – including investment, planning, and even ethical considerations.

You can watch our entire conversation in the video embedded above. An edited partial transcript is available below and you can read the complete transcript at the CXOTALK site.

How should organizations think about investing in AI?

Michael Chui: More organizations have started to understand the potential of data analytics. Executives are starting to understand that data and analytics are becoming either a basis of competition or a basis for offering the services and products that your customers, citizens, and stakeholders need.

While there are often real technology challenges, we often find the real barrier is the people stuff. How do you get from an interesting experiment to business-relevant insight? We could increase the conversion rate by X percent if we used this next-product-to-buy algorithm and this data; we could reduce maintenance costs, or increase the uptime of this whole good. We could, in fact, bring more people into this public service because we can find them better.

Getting from that insight to capturing value at scale is where organizations are either stuck or failing. How do you bake that interesting insight, that thing you capture, whether it’s in the form of a machine learning algorithm or other types of analytics, into the practices and processes of an organization, so it changes the way things operate at scale? To use a military metaphor: how do you steer that aircraft carrier? It’s as true for freight ships as it is for military ships. They are hard things to turn.

It’s the organizational challenge of understanding the mindsets, having the right talent in place, and then changing the practices at scale. That’s where we see a big difference between organizations who have just reached awareness and maybe done something interesting and ones who have radically changed their performance in a positive way through data, analytics, and AI.

What are the adoption problems around AI and machine learning?

David Bray: The real secret to success is changing what people do in an organization. You can’t just roll out technology and say, “We’ve gone digital,” without changing any of your business processes, and expect great outcomes. I have seen experiments that are isolated from the rest of public service; they say, “Well look, we’re doing these experiments over here!” but they never translate into changing how you do the business of public service at scale.

Doing that requires not just technology, but understanding the narrative of how the current processes work, why they’re being done that way in an organization, and then what is the to-be state, and how are you going to be that leader that shepherds the change from the as-is to the to-be state? For public service, we probably lack conversations right now about how to deliver results differently and dramatically better to the public.

Artificial intelligence, in some respects, is just a continuation of predictive analytics, a continuation of big data, it is nothing new because technology always changes the art of the possible; this is just a new art of the possible.

I do think there’s an interesting thing in which it could offer a reflection of our biases through artificial intelligence. If we’re not careful, we’ll roll out artificial intelligence, populating it with data from humans, [and] we know humans have biases, and we’ll find out that the artificial intelligence itself, the machine learning itself, is biased. I think that’s a little bit more unique than just a predictive analytics bias or big data.

Which business areas are most suited to AI?

Michael Chui: When we surveyed about 600 different industry experts, for every single one of the problems we identified, at least one expert suggested it was among the top three problems that machine learning could help improve. What that says is that the potential is absolutely huge. There’s almost no problem where AI and machine learning couldn’t potentially change and improve performance.

A few things come to mind. One is that a lot of the most interesting recent research has been in the field called “deep learning,” which is particularly suited to certain types of pattern-recognition problems, often involving images. So problems like image recognition and pattern recognition are among those that are quite amenable and interesting.

So again, regarding very specific types of problems, predictive maintenance is huge: the ability to keep something from breaking; rather than waiting until it breaks and then fixing it, the ability to predict when something’s going to break. Not only because it reduces the cost; more important, the thing doesn’t go down. If you bring down a part of an assembly line, you often bring down the entire line or the entire factory.

To a certain extent, that is an example of pattern matching. Sensors provide the signals that something’s going to break, informing you to do predictive maintenance. We find that across a huge number of industries that have these capital assets, whether it’s a generator, a building, an HVAC system, or a vehicle: if you’re able to predict ahead of time that something’s going to break, you should conduct some maintenance. That is one of the areas in which machine learning can be quite powerful.

Health care is another case of predictive maintenance, but on the human capital asset. Then you can start to think, “Well gosh! I have the internet of things. I have sensors on a patient’s body. Can I tell before they’re going to have a cardiac incident? Can I tell before someone’s going to have a diabetic incident, so they can take actions which are less expensive, and less invasive, than having it turn into an emergency where they must go through a very expensive, painful, urgent-care type of situation?”

Again, can you use machine learning to make predictions? Those are some of the problems that can potentially be solved better by using AI and machine learning.
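As a toy illustration of that pattern-matching idea, the sketch below flags a machine for maintenance when recent sensor readings drift far outside their historical baseline. Production systems learn these patterns from data rather than applying a fixed z-score rule, and every number here is invented.

    from statistics import mean, stdev

    baseline = [0.9, 1.1, 1.0, 0.95, 1.05, 1.0, 0.98, 1.02]  # normal vibration (mm/s)
    recent = [1.0, 1.4, 1.9, 2.3]                             # readings trending upward

    mu, sigma = mean(baseline), stdev(baseline)

    for reading in recent:
        z = (reading - mu) / sigma  # distance from normal, in standard deviations
        status = "SCHEDULE MAINTENANCE" if z > 3 else "ok"
        print(f"vibration={reading:.2f}  z={z:+.1f}  {status}")

The same shape applies to the health-care example above: establish a baseline for the asset, whether a generator or a patient’s heart rate, then act when the signal departs from it.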

David Bray: There are opportunities for artificial intelligence and machine learning to help the public. I think a lot is going to happen first in cities.

We’ve heard about smart cities. You can easily see better preventive maintenance on roads or power generation and then monitoring to avoid brownouts. I think the real practical, initial, early adoption of AI and machine learning is going to happen first at the city level. Then we’ve got to figure out how to best use it at the federal level.

CXOTALK brings together the most innovative leaders in the world for in-depth conversations about leadership and innovation. See the complete list of episodes.

(Cross-posted @ ZDNet | Beyond IT Failure Blog)