IT heroes: Use customer service to build business relationships



The importance of building bridges between IT and business stakeholders is obvious, but getting there is hard.

In his book, Driving Digital, veteran CIO Isaac Sacolick describes the need to build these relationships from the bottom up:

The solutions and transformational practices that I helped implement across these organizations had more similarities than differences. They were all developed “bottom up,” that is, they were all implemented with tactical practices first, transformational practices second, and cultural changes ongoing. I started the practices in IT first, extended to business teams second, and then drove business change and strategic transformation later.

The not-so-subtle message is that decreeing relationship change simply does not work. Of course, senior leaders must set broad objectives, but goals alone do not create results. Making pronouncements is easy, but actually driving deep cooperation across departments is more difficult.

The need to change is part of a broader shift inside IT, striving to say “yes” in response to user requests rather than the traditional “default to no” mentality.

During a recent conference, I spoke with several IT managers (these are not CIOs) who are using customer service as the model for defining relationships between business stakeholders and IT. FinancialForce invited me to its Community Live 2017 event in Las Vegas to record these conversations as part of the CXOTalk series of conversations with innovators.

The first conversation I want to highlight explored innovation and project management with IT leaders from Medidata Solutions, a life sciences technology provider that offers a cloud-based solution for managing clinical trials and drug research.

I held the second discussion with MicroStrategy, a well-known business intelligence software company.

Medidata Solutions: Project management and innovation

The conversation with Medidata brought together Michael Shulich, Senior Director, and Naimisha Kollu, Senior Information Systems Project Manager, both of whom work in the Business Innovation Office.

Watch our entire conversation in the following video and read edited comments below. You can see the whole transcript as well.

Tell us about Medidata Solutions

Michael Shulich: Medidata was founded in 1999. We went public in 2009. We were born in the cloud. We're a software company; our specialty is the life sciences vertical. Our customers are pharmaceutical companies and medical device companies; basically, what our customers are doing is running clinical trials to create breakthrough drugs to help patients. We create the architecture that they run their trials on, and we accelerate that.

Why is the innovation organization part of IT?

Michael Shulich: I sit in the technology stack, and the technology stack makes up a lot of our company. As a cultural attribute, we're looking to transform our industry. We invest 25% of our revenue back into R&D. We're long-term-focused, so what we're doing for clients in terms of transforming the industry, I try to do internally. So, for our professional services, legal, and HR people, I make sure they have the best tools, systems, and processes so they can execute their mission.

We create new products; we change the way clinical trials are done. That's baked into our technology. We're not looking to just enable. We're looking to transform. We're very passionate about what we do. We are helping create new drugs, new treatments… Our mission is to power smarter treatments and healthier people. You can't do that by doing what you used to do. And you can't do it being incremental. So, we have the external focus of what we do with our clients. My job, on the internal side, is to help Medidata run better.

Naimisha Kollu: We empower our users. The change doesn’t come from outside. It has to come from within the organization. So that’s what we do. We bridge the gap between the organizational departments. We bring technology and the processes together to make that transformation.

How does this view influence project management?

Naimisha Kollu: While Mike is coming from a technology stack, I'm coming from a PMO stack. We are hand-in-hand organizations that work in parallel. And again, we are from the same Business Innovation Office, just two different streams.

As a member of the project management office, my responsibility is not just project management. We are not the typical project managers you see in any organization. We don't just facilitate meetings. We don't just manage projects. What we bring to the table is our experience with these applications and with the business processes that run within the organization.

If a professional services department wants to implement a tool, we know which tool to bring into the architecture because we have experience with our other applications within our infrastructure; and we will be able to guide them in the right way, which is going to be a better fit culturally and also from our architecture standpoint. So, as a project manager, we bring and bridge the gap between the departments, between the teams, between the technologies, and the processes.

Michael Shulich: She is a subject matter expert. She can go toe-to-toe. To achieve a transformation, you have to know what the current state is, and you have to have a vision of what the future state is. She is excellent at mapping the way things are today to that future state. And by doing that, I think you get the respect of your internal customers. Because, let's be honest, change is frightening, right? Especially when you have all these things going on, change has to be managed as a process.

How do you handle implementation and adoption?

Naimisha Kollu: Our responsibility doesn’t end just with implementation. We don’t dust our hands after implementation and just vanish. Our responsibility also includes adoption. We won’t be successful without our users adopting the technology; adopting the process. So, we incorporate that adoption as well into our responsibility and we don’t leave them. They trust us because they know we won’t leave them until they are comfortable.

Adoption requires an incremental approach, and we cater to different types of audiences. Ours is a global organization spread across different regions and geographies. We have an internal team called "Merit Academy," which is targeted toward training our external customers on our products, but they also help us train our internal users on the products we use internally.

We use different methods to support adoption. We do live webinars; we do lunch and learns; we do quick short videos; we do roadshows for adoption. We have question and answer sessions.

MicroStrategy: Building a services-focused IT organization

I also spoke with Farnaz Bengali, Vice President of Enterprise Applications at MicroStrategy. Watch our entire conversation in the following video and read her edited comments below. You can see the whole transcript as well.


Tell us about MicroStrategy and your role

Farnaz Bengali: MicroStrategy is an enterprise software company, and we are the best BI tool out there. We’re the original BI tool.

I work under the CIO’s office, and I manage all of our software applications. I am a part of IT, but funny enough, I have no IT background. I came up from an accounting and consulting background, and they really wanted somebody for this job that could help them modernize their applications, which is my role. I help optimize the business processes for everybody, every other department in the company.

For example, Marketing says, "Hey, we can't get leads out to our internal reps fast enough. Can you implement X tool?" I'll think it through and say, "Okay, before I implement the new, shiny tool that you've heard about or watched a YouTube video on, walk me through your leads process. What tools do we use today? Which people are involved? What is the process?" And then I will help them tailor a solution.

It may be a new tool. It may be optimizing something we have currently. Or, it may just be a business process change. Do we really need a new software application? Or, is it something we can just tweak in something current, or a business process?

How do you think about the service aspect of IT?

Farnaz Bengali: I’m trying to make IT a services organization. We should treat [IT customers] as if they were external paying customers.

Without that customer service hat on, most of us wouldn’t be employed.

What are the challenges and opportunities in rethinking IT from a service perspective?

Farnaz Bengali: Making sure the business understands the value proposition we bring to the table. I've ensured that I have the right IT people in the organization. I have also hired a marketing person, somebody who's got a finance hat on, somebody who's got an accounting hat on, a sales hat; we're bringing that expertise to the table from a decision-making perspective.

It’s been very successful and we’re not just back-end people implementing systems anymore.

If you were to hold a product conference, like the one we're at right now, you would bring sales and marketing to the table. You'd think about what customers you want there, who you're marketing to, all those things. I'm also trying to bring IT, the internal IT department, to that table. You may be looking at three different venues; we can help you understand which of them will accommodate the people you need from a wireless and infrastructure perspective. We can also think through what kind of support you'll need at the conference, so we bring that expertise to the table. IT helps focus the decision because now you've got more facts.

What is your final advice?

Farnaz Bengali: Hire the right skill sets. Ensure that you’re able to scale properly; don’t do too much, too fast. And, focus on the process.

CXOTalk brings you the world’s most innovative business leaders, authors, and analysts for in-depth discussion unavailable anywhere else. Thank you to FinancialForce for supporting CXOTalk.

(Cross-posted @ ZDNet | Beyond IT Failure Blog)

Equifax: Disturbing Developments

Credit-reporting company Equifax's data breach, which affects an estimated 143 million people, may not be the largest we have seen; after all, Yahoo lost billions of email accounts in a series of data breaches, most of them disclosed to the public much later. But given the nature of the data at play here, this is shaping up to be the most damaging of all data breaches, much like tropical storm Irma. The feared Irma is on its way to Florida (pray for the safety of everyone living there), but the Equifax data breach storm has the potential to affect such a large set of Americans that it could become the worst data breach ever. The sensitivity of the information, and the direct access it provides, make the big difference between the two. Yahoo's emails could have exposed some bank account or personal information, but a hacker would have to scan an enormous amount of data to find anything worth exploiting; an occasional bank account or credit card number might have been exposed, and the odds are stacked against the attacker given the volume of information to sift through for the useful bits. Equifax is a different ball game altogether. Equifax is one of the three biggest credit-reporting companies in the U.S., and the breach is reported to have occurred from mid-May through July 2017, even though the public only learned about it on September 7, 2017.

According to Equifax, the information accessed includes names, Social Security numbers, birth dates, addresses and, in some instances, driver's license numbers. In addition, credit card numbers for more than 200,000 U.S. consumers and certain dispute documents with personally identifying information for more than 180,000 U.S. consumers were accessed. Come to think of it, this is precisely the information U.S. residents share with a bank to get a credit card, obtain certain types of jobs, or get a mortgage. This is important information, and the breach seems to have affected a large number of Americans. Class-action lawsuits were already being filed less than 24 hours after the information became public.

Equifax’ s response so far has been so pathetic and uncaring, to say the least.

To start with, Equifax did not share news of the breach for several weeks after it began investigating. Second, three top executives sold stock between the time Equifax knew about the breach and the time it shared the news with the public. Public sources reveal that chief financial officer John Gamble sold $946,374 worth of stock, president of U.S. information solutions Joseph Loughran sold $584,099 worth, and president of workforce solutions Rodolfo Ploder sold $250,458 worth. It may be that these gentlemen were unaware of the breach when they sold (hard to believe, though: a data breach of this magnitude happening without the CFO knowing, and key executives inside Equifax unable to find out what the matter was!), but in popular perception Equifax is clearly on the defensive on this ground.

Equifax has since set up a web page with information and a way to enroll in "complimentary identity theft protection and credit file monitoring services" and to find out whether your personal information may have been impacted. The site requires you to enter your last name and the last six digits of your Social Security number. Equifax won't tell you right away whether you have been affected; the site promises to let you know when you can enroll in the company's "TrustedID Premier" program and tells you to "mark your calendar" to check back. Some security experts were concerned about the basic setup of the site, worrying that it could itself give rise to new data breaches. Many trade sources complained that "customer service agents contacted by phone on the emergency telephone line said they couldn't provide further clarity on the matter," and the people fielding those calls were telling callers that they don't have access to the database of those affected.

What to do next

The options in front of an affected person are indeed very limited. There's the standard advice after a data breach: change your passwords if you reuse the same one, turn on two-factor authentication where possible, and watch for suspicious links or emails claiming to be from Equifax or others. Some suggest freezing your credit file, so that external players cannot access it until the freeze is lifted. You can also turn to the other two big credit-reporting agencies in the U.S., Experian and TransUnion, and make sure there haven't been any recent inquiries into your credit history. Equifax is giving away a free year of credit monitoring and identity-theft insurance, which everyone is highly encouraged to take advantage of. Those who are already victims of identity theft are encouraged to visit the FTC Identity Theft Recovery website and follow the steps there; the Federal Trade Commission will provide the victim with a specific identity theft report and a "to-do" recovery plan.

On an ongoing basis, spend time keeping a closer eye on your credit-card statements; newly issued cards may be more exposed. Don't leave any financial statement archive accessible without your express approval. At the end of it all, it is clear that a tremendous amount of data is now floating around in the hands of someone not authorized to hold it, whether criminals or a nation-state. Your Social Security number will never change, and your past addresses will always be your past addresses. The effects of the Equifax breach will be felt for years to come. Beware of phishing mails, which are clickbait designed to draw you into rogue schemes, and keep your machines up to date with all patches applied.

One of the things I find disturbing about this data breach is that there is essentially nothing any of us could have done to protect ourselves. We're told to have strong passwords, avoid risky sites and apps, and use security software, but that only protects our devices, not data stored by others. And in the case of Equifax and the other credit-reporting bureaus, it's not as if we've even chosen to do business with these companies. They collect and store sensitive data about us whether we like it or not, and I'm not even sure there is a way to opt out. Companies are supposed to safeguard this information, but they're subject to hacks, human error, and even deliberate breaches from within. Medicare even puts recipients' Social Security numbers on their cards, which they usually carry in their wallets, so if a wallet is stolen, the owner's identity is at risk. Medicare plans to change this next year, but in the meantime millions of people over 62 are vulnerable. We need to figure out a way to prevent Social Security numbers from being used to steal our identities. I'm not sure how that can be done, but I'm pretty sure it's doable.

Some hackers are reportedly already trying to take advantage of this development. I would also call for a national center for cyber breaches, which would act as a clearinghouse for providing national relief; every enterprise should undergo an annual security checkup. I hope America comes out of this unscathed.

(Cross-posted @ Sadagopan’s weblog on Emerging Technologies,Thoughts, Ideas,Trends and The Flat World)

AI on the high seas: Royal Caribbean sets a course for “frictionless and immersive” vacations

Royal Caribbean, the world's second largest cruise line, operates in 47 different countries with over 50 ships, each of which is a floating city transporting and entertaining between 2,500 and 7,000 guests at a time. Running a cruise line at this scale presents massive logistical challenges.

I caught up with Royal Caribbean’s Chief Information Officer Mike Giresi at the Digital Workforce Summit, held in New York City by software company IPsoft. The event’s theme was using AI and cognitive learning to automate and improve customer service — thus the idea of digital workforce.

This video is part of the CXOTalk series of conversations with the world’s top innovators. You can watch it embedded above and see the complete transcript on the CXOTalk video page.

Royal Caribbean is undertaking a large digital transformation initiative to rethink the guest experience. According to CIO Giresi, Royal Caribbean’s goal is providing guests with a personalized experience that is also easy to understand. In his words, to create a “frictionless and immersive vacation experience for our guests:”

Our intent is to make it as simple as possible for you to understand the product, to be able to select what the product represents to you and experience the product once you’re on the actual, physical ship.

Customer value comes first. Digital transformation starts with the question, “What do our customers want?” In the case of Royal Caribbean, there are two crucial points.

First, the company wants to help customers visualize and understand, at a visceral, emotional level, the positive life experience of being on a Royal Caribbean cruise. Because customers have different goals, communicating this message meaningfully is hard. For example, one cruise shopper may want a peaceful getaway on the sea while another desires hot nightlife: Two buyers, each seeking their own unique experience.

Second, Royal Caribbean believes its primary job is making the cruise experience fast, easy, and fun. Mike spoke about creating “frictionless and immersive vacations.” To do this, the company uses technology to make life simple and engaging for guests.

The term frictionless also implies operational efficiency. Consider the practical challenges of boarding and disembarking thousands of passengers quickly and without incident from a cruise liner, or the difficulty of offering computing and data services in the middle of an ocean, thousands of miles from land. Customer experience demands that Royal Caribbean solve these issues every single day.

The foundation issue is rethinking the entire cruise experience by answering the question, “What do customers care about most?”

Technology enables customer experience. Having set priorities based on what matters to customers, the business can use technology to enable outcomes that customers desire.

Giresi explains:

Technology provides the entire guest experience. We’re modernizing our technology to enable the guest to have much more control and direct selection of what they want to do with the product itself; moving from reservation being the center of our universe, to the guest being the center of our universe, and then building capability services integration points.

[We are] enabling technology to move with the guests versus the guests having to traverse different monolithic and antiquated systems and ultimately feel like nothing is purposely put together.

With customer experience as the reference point, determining priorities for making technology investment decisions becomes easier. Defining customer priorities as the reference also aligns IT activities with business strategy, which obviously is of huge value to the company.


Here is an edited transcript of our conversation.

What are your customer goals?

We want to do more with the product, enabling both guest and customer experiences, if you will, but in a way that broadens the ship: how do we expand the vacation experience beyond the ship, so that you're not constrained by its physical limitations? That's the design behind the technology strategy.

We want people to feel like coming to a cruise is not an overwhelming or intimidating experience. We want people to feel confident that as soon as they get on the ship, their vacation begins. In fact, we’d love their vacation to begin before they arrive. Once you enter the port to walk onto the ship, we want it to be as seamless as humanly possible. We want you to enjoy it, feel relaxed, be excited; you have your itinerary, you have your agenda if you will; you know all the things you’re going to do. If you learn of new things, how easy is it to change that, and can swiftly and agilely adapt to whatever is available to you, to maximize that experience.

Can you give us an example?

So, using augmented reality or virtual reality to bring experiences onto the ship that you would not otherwise be able to see, because the ship has physical limitations; helping people understand what's happening with the ship; doing interesting things with social. Enabling people to self-select opportunities to go on excursions that may not have been available to them, and personalizing that information so they can get to the things that interest them most.

We believe we are in the business of making tremendous memories. The better we can provide that information to you, the more successful we’re going to be in providing the product.

When are you rolling out this technology?

We’re in early days. We’ve gone through a lot of the heavy lifting from a foundational capability perspective.

When you think about a ship, you have a bunch of people, obviously guests on the ship, but there’s a lot of crew on the ship, and there are a lot of supply chain processes. What it takes to run one of these floating cities is no different than what it takes to run a city. You’re just running it at sea.

Each time that ship comes into a port, each time it does something, there’s an opportunity to change and/or impact the experience. So, how do we make sure we maximize our processes and people in support of this program so that people feel like it’s something of value?

What are you doing with AI?

We are looking at two aspects of AI. One is our actual workforce: how we can offer better information and help ensure that every guest interaction — whether in our call center or with our crew interacting with guests — is high quality and drives a great experience.

We believe there’s an opportunity to provide guests with more personalized information, with more options that are relevant to their interests, and the more authentic it feels to someone, people will be friendlier to it and feel less intimidated by the overall process.

AI enables us to quickly move those issues to the point of solution much faster and proactively resolve issues before they become issues.

When we turn a ship, it's much like a plane; it's just a lot more complicated. Our ability to disembark people from that ship, invite the new guests onto the ship, and do that in a successful and high-quality manner is critical to the success of the journey.

Where we’re looking at AI, it is around the consumer experience. When you come to a cruise site, the amount of data that’s available to you is voluminous, I mean, there’s so much information.

If we know a little bit about you, and we understand what you're interested in, we can deliver that information in a much more personalized manner to the call center and crew.

How do we get better information to people so they can service the guests and help guests maximize their interaction with the business?

Obviously, we think we can help convert and acquire people more effectively by understanding behavioral trends and historical activities.

And, for our crew, it’s about giving them the right information when they most need it to provide the right level of service to our guests.

CXOTalk brings you the world’s most innovative business leaders, authors, and analysts for in-depth discussion unavailable anywhere else. Thank you to IPsoft for being a CXOTalk underwriter.

(Cross-posted @ ZDNet | Beyond IT Failure Blog)

Time to Drop LastPass? The Most Irresponsible, Idiotic Post Ever!


I have not blogged for a while, but my blood is boiling at such an irresponsible call by MakeUseOf, a site I used to respect:

It’s Time to Drop LastPass: How to Migrate to a Better Password Manager

As a long-time LastPass user, my first reaction was panic: yet another breach? But no such thing. The article simply provides step-by-step instructions on how to switch from LastPass to 1Password, Dashlane, or KeePass, should you so wish. But why? Who knows. I can't find any reference to a new LastPass breach, bug, or any issue not already known and remedied a long time ago.

What I do know is that the safety and integrity of the central password manager is critical for all of us, and moving to another provider is… well, kind of a Big Deal.

Which is why such crap calls are utterly irresponsible.  I lost all respect for MakeUseOf.

</rant>  (and I am feeling better now)


Artificial intelligence and privacy engineering: Why it matters NOW

As artificial intelligence proliferates, companies and governments are aggregating enormous data sets to feed their AI initiatives.

Although privacy is not a new concept in computing, the growth of aggregated data magnifies privacy challenges and leads to extreme ethical risks such as unintentionally building biased AI systems, among many others.

Privacy and artificial intelligence are both complex topics. There are no easy or simple answers because solutions lie at the shifting and conflicted intersection of technology, commercial profit, public policy, and even individual and cultural attitudes.

Given this complexity, I invited two brilliant people to share their thoughts in a CXOTalk conversation on privacy and AI. Watch the video embedded above to see the entire discussion, which was Episode 229 of CXOTalk.

Michelle Dennedy is the Chief Privacy Officer at Cisco. She is an attorney, author of the book The Privacy Engineer’s Manifesto, and one of the world’s most respected experts on privacy engineering.

David Bray is Chief Ventures Officer at the National Geospatial-Intelligence Agency. Previously, he was an Eisenhower Fellow and Chief Information Officer at the Federal Communications Commission. David is one of the foremost change agents in the US federal government.

Here are edited excerpts from the conversation. You can read the entire transcript at the CXOTALK site.

What is privacy engineering?

Michelle Dennedy: Privacy by Design is a policy concept that had been hanging around for ten years in the networks, coming out of Ontario, Canada, with Ann Cavoukian, who was the privacy commissioner of Ontario at the time.

But in 2010, we introduced the concept at the Data Commissioner’s Conference in Jerusalem, and over 120 different countries agreed we should contemplate privacy in the build, in the design. That means not just the technical tools you buy and consume, [but] how you operationalize, how you run your business; how you organize around your business.

And, getting down to business on my side of the world, privacy engineering is using the techniques of the technical, the social, the procedural, the training tools that we have available, and in the most basic sense of engineering to say, “What are the routinized systems? What are the frameworks? What are the techniques that we use to mobilize privacy-enhancing technologies that exist today, and look across the processing lifecycle to build in and solve for privacy challenges?”

And I'll double-click on the word "privacy." Privacy, in the functional sense, is the authorized processing of personally-identifiable data using fair, moral, legal, and ethical standards. So, we break down each one of those things and say, "What are the functionalized tools that we can use to promote that whole panoply and complicated movement of personally-identifiable information across networks with all of these other factors built in?" If we can change the fabric down here, and our teams can build this in and make it routinized and invisible, then the rest of the world can work on the more nuanced layers that are also difficult and challenging.

Where does privacy intersect with AI?

David Bray: What Michelle said about building beyond and thinking about networks gets to where we’re at today, now in 2017. It’s not just about individual machines making correlations; it’s about different data feeds streaming in from different networks where you might make a correlation that the individual has not given consent to with […] personally identifiable information.

For AI, it is just the next layer of that. We've gone from individual machines, to networks, to now something that looks for patterns with unprecedented capability. Yet at the end of the day, it still goes back to: what has the individual given consent to? What is being handed off by those machines? What are those data streams?

One of the things I learned when I was in Australia, as well as in Taiwan as an Eisenhower Fellow, is the question, "What can we do to separate the setting of our privacy permissions, and what we want done with our data, from where the data is stored?" Because right now, we have this more simplistic model of "we co-locate on the same platform," and then maybe you get an end-user agreement that's thirty or forty pages long, and you don't read it. Either you accept, or you don't; if you don't accept, you won't get the service, and there's no opportunity to say, "I'm willing to have it used in this context, but not these contexts." And I think that means AI is going to raise questions about the context in which we use these data streams.
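The idea Bray describes, consent scoped to an explicit context and kept separate from the platform that stores the data, can be sketched as a tiny data structure. This is purely an illustrative sketch; the class, method names, and context labels below are hypothetical, not anything the FCC or the speakers actually built:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentPolicy:
    """Consent grants held separately from the data store, scoped per context.

    Instead of one all-or-nothing end-user agreement, each user authorizes
    individual usage contexts, which can be granted or revoked independently.
    """
    # user_id -> set of contexts the user has authorized
    grants: dict = field(default_factory=dict)

    def grant(self, user_id: str, context: str) -> None:
        self.grants.setdefault(user_id, set()).add(context)

    def revoke(self, user_id: str, context: str) -> None:
        self.grants.get(user_id, set()).discard(context)

    def is_authorized(self, user_id: str, context: str) -> bool:
        return context in self.grants.get(user_id, set())

policy = ConsentPolicy()
policy.grant("alice", "service-improvement")

# Any system consuming the data stream checks authorization per context,
# rather than assuming consent because the data lives on its platform.
print(policy.is_authorized("alice", "service-improvement"))    # True
print(policy.is_authorized("alice", "third-party-marketing"))  # False
```

The point of the sketch is the separation of concerns: the consent record travels with the permission question, not with the data platform, so "I'm willing in this context, but not these contexts" becomes a checkable predicate rather than a clause buried in a forty-page agreement.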

How does “context” fit into this?

Michelle Dennedy: We wrote a book a couple of years ago called “The Privacy Engineer’s Manifesto,” and in the manifesto, the techniques that we used are based on really foundational computer science.

Before we called it “computer science” we used to call it “statistics and math.” But even thinking about geometric proof, nothing happens without context. And so, the thought that you have one tool that is appropriate for everything has simply never worked in engineering. You wouldn’t build a bridge with just nails and not use hammers. You wouldn’t think about putting something in the jungle that was built the same way as a structure that you would build in Arizona.

So, thinking about use-cases and contexts with human data, and creating human experiences, is everything. And it makes a lot of sense. If you think about how we’re regulated primarily in the U.S. (we’ll leave the bankers aside for a moment because they’re different agencies), it’s the Federal Communications Commission and the Federal Trade Commission; so, we’re thinking about commercial interests; we’re thinking about communication. And communication is wildly imperfect. Why? Because it’s humans doing all the communicating!

So, any time you talk about something that is as human and humane as processing information that impacts the lives and cultures and commerce of people, you’re going to have to really over-rotate on context. That doesn’t mean everyone gets a specialty thing, but it also doesn’t mean that everyone gets a car in any color they want so long as it’s black.

David Bray: And I want to amplify what Michelle is saying. When I arrived at the FCC in late 2013, we were paying for people to volunteer what their broadband speeds were in certain, select areas because we wanted to see that they were getting the broadband speed that they were promised. And that cost the government money, and it took a lot of work, and so we effectively wanted to roll out an app that could allow people to crowdsource and, if they wanted to, see what their score was and share it voluntarily with the FCC. Recognizing that if I stood up and said, “Hi! I’m with the U.S. government! Would you like to have an app […] for your broadband connection?” it probably would not have been that successful.

But using the principles that you said about privacy engineering and privacy design, one, we made the app open source so people could look at the code. Two, when we designed the code, we made it so that it didn’t capture your IP address, and it didn’t know who you were within a five-mile radius. So, it gave some fuzziness to your actual, specific location, but it was still good enough for informing whether or not broadband speed was as desired.
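The location fuzzing David describes, reporting a coarse position rather than exact coordinates, can be sketched roughly as follows. This is a hypothetical illustration of the general technique, not the FCC app’s actual code; the function and field names are invented.

```python
import math
import random

def fuzz_location(lat, lon, radius_miles=5.0):
    """Return coordinates offset by a random distance up to radius_miles,
    so the reported point only places the user within a coarse area."""
    miles_per_degree_lat = 69.0  # approximate miles per degree of latitude
    # Pick a random bearing and a distance uniform over the disk.
    bearing = random.uniform(0, 2 * math.pi)
    distance = radius_miles * math.sqrt(random.random())
    dlat = (distance * math.cos(bearing)) / miles_per_degree_lat
    dlon = (distance * math.sin(bearing)) / (
        miles_per_degree_lat * math.cos(math.radians(lat))
    )
    return lat + dlat, lon + dlon

# A speed-test report keeps the measurement but never the exact location.
report = {"download_mbps": 42.3, "location": fuzz_location(38.8977, -77.0365)}
```

The measurement stays useful for mapping broadband coverage, while the stored coordinates cannot pinpoint a household.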

And once we did that; also, our terms and conditions were only two pages long, which, again, threw down the gauntlet: “When was the last time you agreed to anything on the internet that was only two pages long?” Rolling that out, the app ended up being the fourth most-downloaded app behind Google Chrome, because there were people that looked at the code and said, “Yea, verily, they have privacy by design.”

And so, I think that this principle of privacy by design is making the recognition that one, it’s not just encryption but then two, it’s not just the legalese. Can you show something that gives people trust; that what you’re doing with their data is explicitly what they have given consent to? That, to me, is what’s needed for AI [which] is, can we do that same thing which shows you what’s being done with your data, and gives you an opportunity to weigh in on whether you want it or not?

Does AI require a new level of information security?

David Bray: So, I’ll give the simple answer which is “Yes.” And now I’ll go beyond that.

So, shifting back first to what Michelle said, I think it is great to unpack that AI is many different things. It’s not a monolithic thing, and it’s worth deciding: are we talking simply about machine learning at speed? Are we talking about neural networks? This matters because five years ago, ten years ago, fifteen years ago, the sheer amount of data that was available to you was nowhere near what it is right now, let alone what it will be in five years.

If we’re right now at about 20 billion networked devices on the face of the planet relative to 7.3 billion human beings, estimates are at between 75 and 300 billion devices in less than five years. And so, I think we’re beginning to have these heightened concerns about ethics and the security of data. To Scott’s question: it’s simply because we are instrumenting ourselves, our cars, our bodies, our homes, and this raises huge amounts of questions about what the machines might make of this data stream. It’s also just the sheer processing capability: the ability to do petaflops and now exaflops and beyond was just not present ten years ago.

So, with that said, to the question of security: it’s security, but we may also need a new word. I heard that in Scandinavia, they talk about integrity and being integral. It’s really about the integrity of that data: Have you given consent to having it used for a particular purpose? So, I think AI could play a role in making sense of whether data is processed securely.

Because the whole challenge right now is that, for most of the processing, we have to decrypt it at some point to make sense of it and then re-encrypt it again. But also, is it being treated with integrity and integral to the individual? Has the individual given consent?

And so, one of the things raised when I was in conversations in Taiwan is the question, “Well, couldn’t we simply have an open-source AI, where we give our permission and our consent to the AI to have our data be used for certain purposes?” For example, it might say, “Okay, well I understand you have a data set served with this platform, this other platform over here, and this platform over here. Are you willing to have that data be brought together to improve your housekeeping?” And you might say “no.” He says, “Okay. But would you be willing to do it if your heart rate drops below a certain level and you’re in a car accident?” And you might say “yes.”
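The context-dependent consent model David describes, where the same data stream is allowed for one purpose and denied for another, could be modeled minimally like this. The stream names and contexts are hypothetical, chosen to mirror his car-accident example.

```python
# Consent is recorded per (data_stream, context) pair rather than per
# platform, so permission travels with the purpose, not with where the
# data happens to be stored.
consents = {
    ("heart_rate", "emergency_response"): True,
    ("heart_rate", "housekeeping"): False,
    ("location", "emergency_response"): True,
}

def may_use(stream, context):
    """Deny by default: a use not explicitly consented to is not allowed."""
    return consents.get((stream, context), False)

# The same heart-rate stream is permitted in one context, denied in another.
allowed_in_crash = may_use("heart_rate", "emergency_response")   # True
allowed_for_chores = may_use("heart_rate", "housekeeping")       # False
```

The deny-by-default lookup is the key design choice: a context the user was never asked about is treated as unconsented, rather than inherited from a blanket end-user agreement.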

And so, the only way I think we could ever possibly do context is not going down a series of checklists and trying to check all possible scenarios. It is going to have to be a machine that can talk to us and have conversations about what we do and do not want to have done with our data.

What about the risks of creating bias in AI?

Michelle Dennedy: Madeleine Clare Elish wrote a paper called “Moral Crumple Zones,” and I just love even the visual of it. If you think about cars and what we know about humans driving cars, they smash into each other in certain known ways. And the way that we’ve gotten better and lowered fatalities of known car crashes is using physics and geometry to design a cavity in various parts of the car where there’s nothing there that’s going to explode or catch fire, etc. as an impact crumple zone. So all the force and the energy goes away from the passenger and into the physical crumple zone of the car.

Madeleine is working on exactly what we’re talking about. We don’t know when it’s unconscious or unintentional bias precisely because it’s unconscious or unintentional. But we can design in ethical crumple zones, where we test data before feeding it in, just like we do with sandboxing, or with dummy data before we go live in other types of IT systems. We can decide to use AI technology and add in known issues for retraining that database.

I’ll give you Watson as an example. Watson isn’t a thing; Watson is a brand. The way that the Watson computer beat Jeopardy contestants was by learning Wikipedia: by processing mass quantities of stated data, at whatever level of authenticity, and pattern-matching on it.

What Watson cannot do is selectively forget. Your brain and your neural network are better at forgetting and ignoring data than they are at processing it. We’re trying to make our computers simulate a brain, except that brains are good at forgetting, and AI is not good at that yet. So, you can take the tax code, which would fill three ballrooms if you printed it out on paper, feed it into an AI-type dataset, and train it on the known amounts of money someone should pay in a given context.

What you can’t do, and what I think would be fascinating if we did do, is if we could wrangle the data of all the cheaters. What are the most common cheats? How do we cheat? And we know the ones that get caught, but more importantly, how do […] get caught? That’s the stuff where I think you need to design in a moral and ethical crumple zone and say, “How do people actively use systems?”

The concept of the ghost in the machine: how do machines that are well-trained with data experience degradation over time? Either they’re not pulling from datasets because the equipment is simply … You know, they’re not reading tape drives anymore, or it’s not being fed fresh data, or we’re not deleting old data. There are a lot of different techniques here that have yet to be deployed at scale, and that I think we need to consider before we rely too heavily [on AI] without human checks and balances, and process checks and balances.

How do we solve this bias problem?

David Bray: I think it’s going to have to be a staged approach. As a starting point, you almost need to have the equivalent of a human ombudsman – a series of people looking at what the machine is doing relative to the data that was fed in.

And you can do this in multiple contexts. It could just be internal to the company, and it’s just making sure that what the machine is being fed is not leading it to decisions that are atrocious or erroneous.

Or, if you want to gain public trust, share some of the data, and share some of the outcomes, but abstract anything that’s associated with any one individual and just say, “These types of people applied for loans. These types of loans were awarded,” so we can make sure that the machine is not hinging on some bias that we don’t know about.

Longer-term, though, you’ve got to write that ombudsman. We need to be able to engineer an AI to serve as an ombudsman for the AI itself.

So really, what I’d see is not just AI as just one, monolithic system, it may be one that’s making the decisions, and then another that’s serving as the Jiminy Cricket that says, “This doesn’t make sense. These people are cheating,” and it’s pointing out those flaws in the system as well. So, we need the equivalent of a Jiminy Cricket for AI.

CXOTALK brings you the world’s most innovative business leaders, authors, and analysts for in-depth discussion unavailable anywhere else. Enjoy all our episodes and download the podcast from iTunes and Spreaker.

(Cross-posted @ ZDNet | Beyond IT Failure Blog)

Augmented reality: Field service proof points in the enterprise


Image supplied by Oracle

Although we read about augmented reality in the popular press, the focus tends toward consumer experiences like Pokemon Go and Snapchat. But while the consumer side of AR is huge, there are also important applications in the enterprise.

A presentation at Oracle’s recent Modern Customer Experience conference demonstrated augmented reality applied to field service management. It’s one example where the value of AR is obvious and dramatic.

Because the conference was in Las Vegas, the demo showed a field service technician using AR on a mobile phone to repair a broken slot machine. The demo is instantly compelling because of the visuals and shows a practical enterprise use case for augmented reality.

Although the field service management industry has been innovating around knowledge delivery to technicians for decades, powerful mobile hardware combined with ubiquitous connectivity and AR software changes the game.

ALSO READ: Augmented reality: An enterprise business imperative

To gain an in-depth view of how AR is changing field service, I put questions to Shon Wedde, Oracle’s Senior Director of CX Product Management, and Joshua Bowcott, Product Manager, Oracle Service Cloud. They also captured the sequence of screens in the gallery embedded above.

When is augmented reality most suitable for field service?

Customers adopt AR for various field service applications across all industries. Traditionally, AR emerged where massive pieces of equipment were used, as in oil and gas, as well as in M2M (machine-to-machine) situations and on factory assembly lines. Companies selling complex, connected equipment in industries like manufacturing, medical, and automotive have realized the importance of adopting AR for field service.

Augmented reality is most suitable when it involves connected, complex equipment in a data-rich environment.

The concept of AR has been around for years. What’s new is our ability to take IoT and customer service technologies, such as policy automation and workflow, and integrate them into an AR scenario. Policy automation guides dynamic animation, and IoT data provide real-time feedback, creating a rich environment for AR and field service technicians to work.

We should also note that AR applications go beyond field service scenarios, enriching not only B2B and B2C interactions but also internal company training, self-service, and assisted-service experiences. We explain those in more detail below.

What type of equipment does the field service technician need?

A field service technician can use any mobile device, including cell phones, tablets, goggles, etc.

How does the equipment vendor create the augmented reality content used by field service technicians?

AR content relies on information that already exists. Companies like PTC ThingWorx utilize existing product CAD drawings, scaling them with PTC software to match real-life animation. A field service technician will pull existing information from the contact center’s knowledge base, as they do today.

For example, an AR-equipped mobile device can “point” at a connected piece of equipment, such as a slot machine, and determine its make and model. The slot machine’s problems are also transferred via IoT data. The system uses this information to filter the contact center’s existing knowledge base for articles that pertain to this particular instance, eliminating the technician’s need to manually figure out the machine’s make, model, and where the problems originate.
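The filtering step described above, using the device-reported make, model, and fault codes to narrow the knowledge base, amounts to a simple query. A rough sketch follows; the article titles, field names, and fault codes are invented for illustration and are not Oracle’s actual data model.

```python
# A toy knowledge base: each article is tagged with the equipment it
# covers and the fault codes it addresses.
knowledge_base = [
    {"title": "Reel motor jam",  "make": "Acme", "model": "S-100", "faults": {"E21"}},
    {"title": "Bill validator",  "make": "Acme", "model": "S-100", "faults": {"E07"}},
    {"title": "Display flicker", "make": "Acme", "model": "S-200", "faults": {"E21"}},
]

def relevant_articles(make, model, fault_codes):
    """Keep only articles for this exact machine that mention a reported fault."""
    return [
        a for a in knowledge_base
        if a["make"] == make
        and a["model"] == model
        and a["faults"] & set(fault_codes)
    ]

# IoT telemetry identified the machine and its fault; no manual lookup needed.
matches = relevant_articles("Acme", "S-100", ["E21"])
# → only "Reel motor jam" survives the filter
```

Because the make, model, and fault codes arrive automatically from the connected device, the technician starts from a short, relevant article list rather than searching the full knowledge base.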

From there, service solutions like Policy Automation guide a technician step-by-step with animation, to resolve the slot machine’s issue. If a replacement part is needed, a technician can use integrated commerce functionalities to order that part, specific to the machine’s make and model. Finally, the entire experience is captured and logged alongside the customer’s profile and history with the device and the company.

We should note that the Oracle Policy Automation interview and the user’s answers dictate which AR experience is loaded. [The gallery embedded above] only shows one path [of many possibilities.]

What are the primary applications today for augmented reality in field service?

Companies across all industries are using AR, especially in those maintaining assets. They are B2B as well as B2C companies. Oracle has seen use cases from wind farms, control systems, and medical equipment to household appliances and motorcycle manufacturers.

AR applications extend far beyond field service. There’s a massive shift underway as AR emerges as a new consumer user interface. Facebook is now delivering AR as part of its core platform and we see mainstream AR technology in apps like Snapchat, for example.

Consumers are becoming increasingly comfortable with AR as a new self-service channel. For example, a consumer wouldn’t call a field service technician to their house to fix a coffee machine. Instead, AR would walk the consumer through steps on his or her mobile device or tablet to diagnose the coffee machine issue and then either change a coffee filter or click to buy a new coffee filter. This AR scenario isn’t designed to enable an agent. Instead, it’s bringing the consumer directly into the self-service experience. He or she can interact with an agent via chat or video chat right on the device, as an assisted-service experience if needed.

Furthermore, combining AR with other Oracle technologies enables businesses to service equipment faster, without the need to dispatch a technician or expert, allowing for quicker resolution. One expert technician can capture an entire installation or service experience with AR, which can then be shared as a virtual reality training program, available to a company’s employees anytime and anywhere in the world.

Disclosure: Oracle is a consulting client
