Artificial Intelligence Archives - Reader Techlines


  1. AI permeation. Artificial intelligence (AI), largely manifesting through machine learning algorithms, isn’t just getting better. It isn’t just getting more funding. It’s being incorporated into a more diverse range of applications. Rather than focusing on one goal, like mastering a game or communicating with humans, AI is starting to make an appearance in almost every new platform, app, or device, and that trend is only going to accelerate in 2018. We’re not at techno-pocalypse levels (and AI may never be sophisticated enough for us to reach that point), but by the end of 2018, AI will become even more of a mainstay in all forms of technology.

  2. Digital centralization. Over the past decade, we’ve seen the debut of many different types of devices, including smartphones, tablets, smart TVs, and dozens of other “smart” appliances. We’ve also come to rely on many individual apps in our daily lives, for everything from navigation to controlling the temperature of our homes. Consumers are craving centralization: a convenient way to manage everything from as few devices and central locations as possible. Smart speakers are a good step in the right direction, but 2018 may see the rise of something even better.

  3. 5G preparation. Though tech timelines rarely play out the way we think, it’s possible that we could have a 5G network in place—with 5G phones—by the end of 2019. 5G internet has the potential to be almost 10 times faster than 4G, making it even better than most home internet services. Accordingly, it has the potential to revolutionize how consumers use the internet and how developers think about apps and streaming content. 2018, then, is going to be a year of massive preparation for engineers, developers, and consumers as they gear up for a new generation of internet.

  4. Data overload. By now, every company in the world has realized the awesome power and commoditization of consumer data, and in 2018, data collection is going to become an even higher priority. With consumers talking to smart speakers throughout their day, and relying on digital devices for most of their daily tasks, companies will soon have access to—and start using—practically unlimited amounts of personal data. This has many implications, including reduced privacy, more personalized ads, and possibly more positive outcomes, such as better predictive algorithms in healthcare.

 



A recent report on artificial intelligence (AI) by an Indian government think tank foresees the country as an AI hub for the developing world. Research analyst Shashank Reddy writes about the possibility of that happening.
India is the latest country to join the race to lead the AI revolution, which is still in the making. The world’s richest – and most powerful – countries have long been in this competition. It cuts across all spheres of national power, from the economy to the military, because the idea is that leadership in AI will enable global dominance.
The two biggest powers so far have been the United States and China, with each investing heavily in AI and its applications. So does India stand a chance?
Yes, according to a report released this month by think tank Niti Aayog.
What India can offer

The report – which has been drafted as a “national strategy on AI” – admits that India lags significantly behind the superpowers in fundamental research and resources. Compared to the United States, it has fewer researchers and only a handful of dedicated laboratories and university departments. India also does not have tech giants such as Google and Amazon or behemoths like Baidu and Alibaba – all companies that can afford to invest in cutting-edge research.

But India enjoys crucial advantages too. It has a vast engineering workforce, a burgeoning start-up scene and an increasing amount of data as more people buy smartphones and go online.
The report itself is the latest in a slew of recent endeavours by the Indian government to encourage AI research. The federal government has created special committees to explore the possibilities AI offers in various sectors, from commerce to defence, as well as the issues that could arise from its widespread use. This year’s budget allocated money to develop a national AI strategy.


AI Expo: Evolving to emotionally intelligent applications

Speaking at AI Expo in Amsterdam, BPU Holdings CTO Carlos Art Nevarez believes it’s time for machines to become emotionally intelligent.

Machines are becoming increasingly smart thanks to artificial intelligence, but they still remain cold, logical, and lacking emotion. Worse still, they have a bias problem.

“We are teaching the machine to synthetically emulate emotional intelligence to better relate to how you and I feel,” states Nevarez.

“So many exciting applications present themselves to enhance healthcare analytics, market assessment, consumer and voter sentiment, and delivering customised content in the Internet of Things.”

Nevarez recounted a time he was out with his son and they saw a person fall. His son laughed, but – when Nevarez explained the person could be hurt – his son became empathetic. Over time, his son recognised when to show empathy.

AI learns from patterns, and Nevarez believes that – much like his son – it can be taught empathetic values.

“Teaching a machine to feel is just as important as teaching a machine to think,” says Nevarez, “or we end up with a world heavily biased by the engineers that program those AIs.”

EMOTIONAL ANALYSIS

The company started with political forecasting. Politics, as we all know, is very much driven by sentiment and ideologies.

In the past couple of years, there have been some major elections and decisions that few saw coming.

Nevarez applied BPU’s emotional computing engine to the US presidential election as a personal project. Based on its analysis, Donald Trump was going to win.

“I wanted to see if our emotional computing engine would come close to predicting the outcome of the election,” recalls Nevarez. “After watching the election for about eight weeks, and not really getting a ton of data – because it was just me and I didn’t have the computing resources of the company – I came up with the prediction that Donald Trump was going to win.”

“We called our engineers and said there’s something wrong with our algorithms, it’s predicting Trump is going to win.”

A week later, Donald Trump won.

Very little polling is done via telephone anymore; it’s mostly online. This has increased accuracy, as people are more honest online.

“People tend to be more honest when they’re flaming someone on Twitter,” comments Nevarez. “When people ask [in person] ‘How do you feel about this candidate?’ then people want to be nice, they don’t want to hurt someone’s feelings.”
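
To make the approach concrete, here is a minimal sketch of lexicon-based sentiment aggregation over tweets. It illustrates only the general technique – BPU’s emotional computing engine is proprietary and far more sophisticated – and the candidate names, tweets, and word lists below are hypothetical:

```python
# Minimal lexicon-based sentiment sketch: score each tweet by counting
# positive and negative words, then average per candidate.
POSITIVE = {"great", "love", "win", "strong", "honest"}
NEGATIVE = {"terrible", "hate", "lose", "weak", "corrupt"}

def tweet_score(text):
    """+1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def aggregate(tweets_by_candidate):
    """Average sentiment of tweets mentioning each candidate."""
    return {
        candidate: sum(tweet_score(t) for t in tweets) / max(len(tweets), 1)
        for candidate, tweets in tweets_by_candidate.items()
    }

sample = {
    "Candidate A": ["love the strong message", "a terrible debate"],
    "Candidate B": ["honest and great plans", "people love this campaign"],
}
print(aggregate(sample))  # higher average = more positive online sentiment
```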

For a customer this time, BPU attempted to predict the Korean elections.

Twitter is less widely used in South Korea, so Nevarez was sceptical the analysis would be accurate. Yet again, however, the engine correctly predicted the outcome: the election of President Moon Jae-in.

BPU even released its results on the morning of the election, days before traditional pollsters. The worst margin of error was just two percent.

For a final example, BPU showed how it correctly ranked the results of the Nevada US House District Republican Primary election.

 

These examples show BPU’s sentiment analysis works. However, it’s understanding an individual’s emotions – and helping to alter them for the better – which could have the greatest impact.

Seth Grimes, Principal Analyst for Alta Plana specialising in natural language processing (NLP), text analytics, and sentiment analysis, states: “Automated emotion understanding — emotion AI — is now a must-have capability for consumer marketing and public-facing campaigns, including electoral campaigns…”

BUILDING EMOTIONAL APPS

AI is revolutionising healthcare, and will continue to do so. It’s also one of the areas where empathy is most needed.

aiMei is an app created by BPU which offers personality tests and mood analysis in a chatbot-like interface. The company made a version of it for the medical industry where a physician can train it for a patient’s needs.

The app could ask whether a person has taken their ibuprofen yet, for example. Because ibuprofen is known to cause stomach irritation when taken on an empty stomach, it could also ask whether the individual has eaten.

A bit later it could ask if the person wants a snack, pre-programmed with those available that day, so a nurse doesn’t have to go and check with each patient.

Finally, the patient may be asked how they’re feeling an hour or so later, to gauge whether the medication is working.
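
A minimal sketch of that kind of rule-based check-in flow might look as follows. The prompts, timings, and patient fields are invented for illustration and are not aiMei’s actual logic:

```python
# Hypothetical check-in flow following the order described above:
# food first, then medication, then a delayed mood check.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Patient:
    has_eaten: bool = False
    took_ibuprofen: bool = False
    mood: Optional[str] = None

def next_prompt(p: Patient, minutes_since_dose: int) -> str:
    """Pick the next question for the patient."""
    if not p.has_eaten:
        # Ibuprofen can irritate an empty stomach, so ask about food first.
        return "Have you eaten yet? Would you like a snack?"
    if not p.took_ibuprofen:
        return "Please remember to take your ibuprofen."
    if minutes_since_dose >= 60 and p.mood is None:
        # A mood check an hour later hints at whether the dose is working.
        return "How are you feeling now?"
    return "All good for now."

print(next_prompt(Patient(), 0))                                       # food first
print(next_prompt(Patient(has_eaten=True, took_ibuprofen=True), 75))   # mood check
```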

Physicians have reportedly told BPU that, while they can monitor things like temperature, they’re unable to keep a record of a patient’s mental wellbeing – but they’d like to.

Salim Hariri, Ph.D., co-director of the National Science Foundation’s Center for Autonomic Computing at the University of Arizona, recently stated: “Among many other applications, BPU’s AEI technology shows great potential for healthcare advances in patient emotional and critical assessment.”

The company also has a smartwatch app which can detect when a person’s heart rate is accelerating and look for the potential reason.

By collaborating with a heart surgeon, the smartwatch app can accurately predict a cardiac arrest five minutes before it occurs. This means health professionals can be alerted earlier to begin preparations which could be life-saving.
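
As an illustration of the underlying idea – flagging a heart rate that suddenly accelerates away from its rolling baseline – here is a hedged sketch. The window size and threshold are invented; BPU’s clinically informed model is certainly more involved:

```python
# Flag heart-rate readings that jump well above a rolling baseline.
from collections import deque

def monitor(samples, window=30, jump_bpm=25):
    """Yield an alert when a reading exceeds the rolling mean by jump_bpm."""
    recent = deque(maxlen=window)
    for bpm in samples:
        if len(recent) == window and bpm - sum(recent) / window > jump_bpm:
            yield f"ALERT: heart rate {bpm} bpm is {jump_bpm}+ above baseline"
        recent.append(bpm)

stream = [70] * 30 + [72, 75, 110]   # sudden jump at the end
for alert in monitor(stream):
    print(alert)
```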

Outside healthcare, BPU has also produced a personalised news app called Neil which determines a user’s individual emotional reaction to articles in order to serve up more or less of similar coverage.

All of these apps, Nevarez says, are providing the company with a good look at the human emotional genome and helping it to create frameworks that let anyone build empathetic apps.


Consumers want businesses to have more human-like AI

Research has found most consumers have interacted with AI and would prioritise businesses with human-like implementations.

The research, from Capgemini’s Digital Transformation Institute, found close to three-quarters (73 percent) of consumers have interacted with AI.

Satisfaction among those who have experienced AI interactions is slightly lower, at 69 percent. Even so, over two-thirds is surprisingly high, especially when you consider how dissatisfied people typically are with traditional automated systems.

Just over half (55 percent) of consumers across all age groups want interactions to be a mix of AI and humans, while 64 percent want AIs to be more ‘human-like’ rather than ‘human-looking’.

Interestingly, the fear surrounding AI intellect – likely instilled through sci-fi movies such as Terminator – appears to be decreasing. More than three in five consumers (62 percent) are now comfortable with an AI featuring human-like intellect.

Where an AI has the desired human-like qualities, almost half (48 percent) say they feel more goodwill towards a company and would have a greater propensity to spend.

One major benefit the majority (63 percent) of people felt AI had over humans was its 24/7 availability and the greater control it gives them over their interactions.

Back in May, Google showed a demo of its impressive-yet-controversial ‘Duplex’ (now renamed to ‘Duet’) AI system which could make calls on a person’s behalf while sounding like a human. It was shown booking a hair salon appointment with the other person completely unaware they were speaking to an AI.

48 percent of respondents say the opportunity to delegate tasks to an electronic personal assistant is exciting, with another 46 percent believing it will enhance their quality of life.

Despite consumers wanting AI to have human attributes such as a similar voice (62 percent) and the ability to understand emotions (57 percent), they also find human-looking AIs “creepy” and do not want them to appear human.

More than half (52 percent) of customers are not comfortable when an AI is set up to look like a person. The report also finds that two-thirds of consumers (66 percent) would like to be made aware when companies are enabling interactions via AI.

Some companies, such as Google and Microsoft, have committed themselves to ensuring their AIs identify themselves as such to a human at the beginning of an interaction. Lawmakers are considering legislation to make this a legal obligation.

The research surveyed 10,000 consumers and more than 500 executives at leading organisations across 10 global markets.


UK and France announce measures to strengthen AI ties

UK Digital Secretary Matt Hancock is visiting France today where he’s set to announce measures for strengthening AI cooperation between the nations.

Industry leaders around the world recognise the UK as a leader in AI research and talent, thanks to its class-leading universities. Since 2014, an AI startup has launched in the UK every five days on average.

This strength has resulted in significant interest from global technology giants – including Google’s £400 million acquisition of DeepMind, and Facebook’s acquisition of Bloomsbury AI earlier this week.

Digital Secretary Matt Hancock said:

“The UK is a digital dynamo, increasingly recognised across the world as a place where ingenuity and innovation can flourish. We are home to four in ten of Europe’s tech businesses worth more than $1 billion and London is the AI capital of Europe.

France is also doing great work in this area, and these new partnerships show the strength and depth of our respective tech industries and are the first stage in us developing a closer working relationship. This will help us better serve our citizens and provide a boost for our digital economies.”

France and the UK are looking to forge closer links between leading AI companies in each of the nations.

Speaking alongside his French counterpart, Mounir Mahjoubi, Mr Hancock will confirm that the UK’s world-leading centre for AI and data – The Alan Turing Institute – is signing an agreement with the French institute DATAIA to promote collaboration between the French and British sectors.

Alan Wilson, CEO of The Alan Turing Institute, said:

“The fundamental goal behind all our research is to build a data and AI enriched world for the benefit of all. In order to do this, it is critical to forge international collaborations and share our knowledge, expertise and ideas with other research centres around the world.

The Institute and DATAIA both share a vision for building research in data science and AI which crosses disciplinary boundaries and recognises the societal implications of data and algorithms. It is a pleasure to kickstart this engagement and we look forward to working with them to advance UK and French excellence in this area.”

The UK is also seen as leading the discussions surrounding AI’s ethical development. The Alan Turing Institute and DATAIA will collaborate in this area too, promoting fairness and transparency in the design and implementation of algorithms.

Researchers from each institute will spend time at each other’s facilities and host joint workshops.

Mr Hancock and Mr Mahjoubi will also sign an accord on digital government. This will commit the two countries to extending their cooperation in the digital sector – on innovation, artificial intelligence, data, and digital administration.

Finally, Mr Hancock will confirm the expansion of London-based Entrepreneur First (EF) to Paris. EF aims to further help founders build leading new companies and promote ties between the nations.

The announcements are made against the backdrop of Brexit, so it’s good to see both nations continuing to forge alliances for shared future success and prosperity.


Scientists pledge not to build AIs which kill without oversight

Thousands of scientists have signed a pledge not to have any role in building AIs which have the ability to kill without human oversight.

When many people think of AI, they give at least a passing thought to the rogue AIs of sci-fi movies, such as the infamous Skynet in Terminator.

In an ideal world, AI would never be used in any military capacity. However, it will almost certainly be developed one way or another, because of the advantage it would provide over an adversary without similar capabilities.

Russian President Vladimir Putin, when asked his thoughts on AI, recently said: “Whoever becomes the leader in this sphere will become the ruler of the world.”

Putin’s words sparked fears of a race in AI development similar to that of the nuclear arms race, and one which could be potentially reckless.

Rather than attempting to stop military AI development, a more attainable goal is to at least ensure any AI decision to kill is subject to human oversight.

Demis Hassabis at Google DeepMind and Elon Musk from SpaceX are among the more than 2,400 scientists who signed the pledge not to develop AI or robots which kill without human oversight.

The pledge was created by The Future of Life Institute and calls on governments to agree on laws and regulations that stigmatise and effectively ban the development of killer robots.

“We the undersigned agree that the decision to take a human life should never be delegated to a machine,” the pledge reads. It goes on to warn that “lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual.”

PROGRAMMING HUMANITY

Human compassion is difficult to program; we’re certainly many years away from being able to do so. It is vital, however, when it comes to life-or-death matters.

Consider a missile defence AI set up to protect a nation. Based on pure logic, it may determine that wiping out another nation which begins a missile programme is the best way to protect its own. Humans would take into account that lives are at stake and seek alternatives, such as diplomatic resolutions.

Robots may one day be used for policing to reduce the risk to human officers. They could be armed with firearms or tasers, but the decision to fire should always come down to a human operator.

Although it will undoubtedly improve with time, AI has been proven to have a serious bias problem. A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western countries are more accurate at detecting Caucasians.

An armed robot that mistakenly identifies one person as another could end up killing that individual simply due to a flaw in its algorithms. Confirming the AI’s assessment with a human operator may be enough to prevent such a disaster.



AI robots will solve underwater infrastructure damage checks

Robots will be paired with a versatile AI that can quickly adapt to unpredictable conditions when examining underwater infrastructure.

Some of a nation’s most vital infrastructure hides beneath the water. The difficulty in accessing most of it, however, makes important damage checks infrequent.

Sending humans down requires significant training, and divers can take several weeks to recover from dives at the often extreme depths involved. There are also far more underwater structures than skilled divers to inspect them.

Robots have been designed to carry out some of these dangerous tasks. The problem is that, until now, they’ve lacked the smarts to deal with the unpredictable and rapidly-changing nature of underwater conditions.

Researchers from Stevens Institute of Technology are working on algorithms which enable these underwater robots to check and protect infrastructure.

Their work is led by Brendan Englot, Professor of Mechanical Engineering at Stevens.

“There are so many difficult disturbances pushing the robot around, and there is often very poor visibility, making it hard to give a vehicle underwater the same situational awareness that a person would have just walking around on the ground or being up in the air,” says Englot.

Englot and his team are using reinforcement learning to train their algorithms. Rather than relying on an exact mathematical model, the robot performs actions and observes whether they help it attain its goal.

Through trial and error, the algorithm is updated with the collected data to figure out the best ways of dealing with changing underwater conditions. This will enable the robot to manoeuvre and navigate successfully, even in previously unmapped areas.
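
For a flavour of this trial-and-error loop, here is a toy tabular Q-learning sketch in which a simulated robot learns to reach a goal on a small grid despite a random ‘current’ pushing it off course. The grid, disturbance model, rewards, and hyperparameters are all invented for illustration; the Stevens team’s actual algorithms are far more sophisticated:

```python
# Toy Q-learning on a 5x5 grid with a random disturbance ("current").
import random

ACTIONS = ["N", "S", "E", "W"]
MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
GOAL = (4, 4)

def step(state, action):
    """Apply an action; 20% of the time the current pushes the robot randomly."""
    dx, dy = MOVES[action] if random.random() > 0.2 else random.choice(list(MOVES.values()))
    new_state = (min(max(state[0] + dx, 0), 4), min(max(state[1] + dy, 0), 4))
    reward = 10.0 if new_state == GOAL else -1.0   # penalise wasted moves
    return new_state, reward

# Action values learned purely from observed outcomes, no exact model needed.
Q = {((x, y), a): 0.0 for x in range(5) for y in range(5) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(2000):
    state = (0, 0)
    while state != GOAL:
        if random.random() < epsilon:                          # explore
            action = random.choice(ACTIONS)
        else:                                                  # exploit best known
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        new_state, reward = step(state, action)
        best_next = max(Q[(new_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = new_state
```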

A robot was recently sent on a mission to map a pier in Manhattan.

“We didn’t have a prior model of that pier,” says Englot. “We were able to just send our robot down and it was able to come back and successfully locate itself throughout the whole mission.”

The robots use sonar for data, widely regarded as the most reliable sensing method for undersea navigation. It works similarly to a dolphin’s echolocation, measuring how long it takes for high-frequency chirps to bounce off nearby structures.
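
The core echolocation arithmetic is simple: range is the chirp’s round-trip travel time multiplied by the speed of sound in water, halved. A quick sketch, using the standard ~1,500 m/s approximation for seawater:

```python
# Range from a sonar chirp's round-trip time.
SPEED_OF_SOUND_WATER = 1500.0   # m/s; varies with temperature, salinity, depth

def range_from_echo(round_trip_seconds: float) -> float:
    # The chirp travels out and back, so halve the round-trip distance.
    return SPEED_OF_SOUND_WATER * round_trip_seconds / 2

print(range_from_echo(0.04))    # a 40 ms echo => structure ~30 metres away
```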

A pitfall of this approach is that the resulting imagery resembles a grayscale medical ultrasound. Englot and his team believe that, once a structure has been mapped out, a second pass by the robot could use a camera to capture high-resolution images of critical areas.

For now, it’s early days, but Englot’s project is an example of how AI is enabling a new era of robotics that improves efficiency while reducing the risks to humans.


Apple aims to simplify AI models with CreateML and Core ML 2

During its annual WWDC event, Apple announced the launch of its CreateML tool alongside the second version of its Core ML framework.

CreateML aims to simplify the creation of AI models. In fact, because it’s built in Swift, it’s possible to use drag-and-drop programming interfaces like Xcode Playgrounds to train models.

Core ML, Apple’s machine learning framework, was first introduced at WWDC last year. This year, the company has focused on making it leaner and meaner.

Apple claims Core ML 2 is 30 percent faster using a technique called batch prediction. Quantization has enabled the framework to shrink models by up to 75 percent.
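
As a hedged sketch of what weight quantization looks like in practice, here is how a developer might shrink a model with Apple’s coremltools Python package. The model filename is hypothetical, and API details may vary between coremltools versions:

```python
# Hypothetical example: shrink a Core ML model via 8-bit weight quantization.
import coremltools
from coremltools.models.neural_network.quantization_utils import quantize_weights

model = coremltools.models.MLModel("MyClassifier.mlmodel")  # hypothetical file

# Reduce 32-bit float weights to 8 bits: roughly a 4x (75 percent) size saving.
# (On non-macOS platforms this call may return a model spec instead.)
quantized = quantize_weights(model, nbits=8)
quantized.save("MyClassifier_quantized.mlmodel")
```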

This is how Apple describes Core ML:

“Core ML lets you integrate a broad variety of machine learning model types into your app. In addition to supporting extensive deep learning with over 30 layer types, it also supports standard models such as tree ensembles, SVMs, and generalised linear models.

Because it’s built on top of low-level technologies like Metal and Accelerate, Core ML seamlessly takes advantage of the CPU and GPU to provide maximum performance and efficiency.

You can run machine learning models on the device so data doesn’t need to leave the device to be analysed.”

An evident effort is being made to reiterate that no information leaves the device, as people become ever more wary about how their data is collected and used.

Google launched ML Kit last month at its I/O developer conference. Most of its abilities can run offline but are more limited than when connected to Google’s cloud. For example, the on-device version of the API could detect a dog is in a photo – but when connected to the internet – it could recognise the specific breed.

Apple says the developers of Memrise, a language-learning app, previously took 24 hours to train a model using 20,000 images. CreateML and Core ML 2 reduced it to 48 minutes on a MacBook Pro and 18 minutes on an iMac Pro. Furthermore, the size of the model was reduced from 90MB to just 3MB.

For developers who like Core ML but use TensorFlow, Google released a tool in December 2017 which converts AI models into a compatible file type. It’s available on the now Microsoft-owned GitHub.
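
A hedged sketch of using that converter – the tfcoreml Python package – is below. The file paths and tensor names are hypothetical placeholders:

```python
# Hypothetical example: convert a frozen TensorFlow graph to Core ML format.
import tfcoreml

tfcoreml.convert(
    tf_model_path="frozen_model.pb",           # frozen TensorFlow graph (placeholder)
    mlmodel_path="ConvertedModel.mlmodel",     # Core ML output file (placeholder)
    input_name_shape_dict={"input:0": [1, 224, 224, 3]},  # assumed input tensor
    output_feature_names=["softmax:0"],                    # assumed output tensor
)
```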


China plans new era of sea power with unmanned AI submarines

China is planning to upgrade its naval power with unmanned AI submarines that aim to provide an edge over the fleets of its global counterparts.

A report by the South China Morning Post on Sunday revealed Beijing’s plans to build the automated subs by the early 2020s in response to unmanned weapons being developed in the US.

The subs will be able to patrol areas in the South China Sea and Pacific Ocean that are home to disputed military bases.

While the expected cost of the submarines has not been disclosed, they’re likely to be cheaper than conventional submarines as they do not require life-support apparatus for humans. However, without a human crew, they’ll also need to be resilient enough to operate at sea without the possibility of onboard repairs.

The XLUUVs (Extra-Large Unmanned Underwater Vehicles) are much bigger than current underwater vehicles, will be able to dock like any other conventional submarine, and will carry a large amount of weaponry and equipment.

As a last resort, they could be used in automated ‘suicide’ attacks that scuttle the vessel to damage an enemy ship, which may or may not be manned.

“The AI has no soul. It is perfect for this kind of job,” said Lin Yang, Chief Scientist on the project. “[An AI sub] can be instructed to take down a nuclear-powered submarine or other high-value targets. It can even perform a kamikaze strike.”

The AI element of the submarines will need to carry out many tasks including navigating often unpredictable waters, following patrol routes, identifying friendly or hostile ships, and making appropriate decisions.

It’s the decision-making that will cause the most concern as the AI is being designed not to seek input during the course of a mission.

The international norm being promoted by AI researchers is that any weaponised AI system should ultimately require human input before making a lethal decision. Any news that China is creating weaponised AIs which do not require human input should be of global concern.


Google promises its call center AI is not designed to replace humans

Not content with its impressive(ly creepy) Duplex demo, Google promises it isn’t looking to replace call centers with its latest AI demonstration.

During the Google Cloud Next ’18 conference, Google Cloud Chief AI Scientist Dr. Fei-Fei Li demonstrated a new AI system called Google Contact Center AI which – much like Duplex – sounded incredibly natural in its responses to human queries.

Google seems to have learned its lesson from the Duplex demonstration and was keen to reiterate that Contact Center AI is not designed to replace human operators.

Instead, the system could replace the dreaded automated messages you often hear when dialling a call center. “Press 1 for… press 2 for…” – just writing it makes me shudder.

Contact Center AI could answer basic questions about the business, providing quick answers to callers and reducing waiting times for those who need a human operator.

When a human operator is necessary, the system could automatically transfer the caller to the correct department.
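
A minimal sketch of that routing behaviour – answer simple FAQs directly, otherwise hand off to the right department – might look like this. The keywords, answers, and departments are invented for illustration, not Google’s implementation:

```python
# Hypothetical FAQ-first routing: resolve simple queries, else transfer.
FAQ = {
    "opening hours": "We're open 9am to 5pm, Monday to Friday.",
    "address": "We're at 1 Example Street.",
}
DEPARTMENTS = {"billing": "Billing", "refund": "Billing", "broken": "Support"}

def handle(query: str) -> str:
    q = query.lower()
    for key, answer in FAQ.items():
        if key in q:
            return answer                               # resolved without a human
    for key, dept in DEPARTMENTS.items():
        if key in q:
            return f"Transferring you to {dept}..."     # route to a human operator
    return "Let me connect you with the next available agent."

print(handle("What are your opening hours?"))
print(handle("My device arrived broken"))
```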

“Contact Center AI is an example for our passion for bringing AI to every industry all the while elevating the role of human talent,” Li told the audience. “We’re creating technology that’s not just powerful but that’s also trustworthy.”

Call centers are something we’ve all had a frustrating experience with, so it’s a good example of the practical uses of AI when explaining to a wider public that may be sceptical about the technology’s benefits.

Google took the right approach when demonstrating Contact Center AI. The company’s Duplex demo spurred a needed debate about whether an AI should always identify itself as such when interacting with a human.

Several companies, including Google, promise their AIs will do so. Some lawmakers are considering legislation that would make it a legal obligation.