How Far Are We From A Real ‘Westworld’?

If you’re a fan of HBO’s Westworld, then you’re probably wondering how something so advanced could become a reality. How far away are we from travelling to these places that blur the line between play and reality? Could robots reach a level of consciousness? Can they dream? Most importantly, would we be safe?

David Eagleman, a neuroscientist at Stanford University in Palo Alto, CA, and scientific adviser to the Westworld writing staff during season one, took a stab at explaining how likely some of these things are to actually happen.

How did you get involved in the show?

Eagleman: I was talking with one of the writers, and I asked who their scientific adviser was. Turns out, they didn’t have one. So that’s how I got on board. Then I went to [Los Angeles] and had a long session with the producers and writers, about 6 hours, maybe 8, on free will and the possibility of robot consciousness.

I also showed them some tech that I’d invented. I gave a TED talk a few years ago on this vest with vibratory motors on it. That’s now part of the season two plot. I can’t tell you anything about it. The real vest vibrates in response to sound, for deaf people, but in Westworld it serves a different purpose, giving the wearers an important data stream.

What else did you talk about?

Eagleman: What is special, if anything, about the human brain, and whether we might come to replicate its important features on another substrate to make a conscious robot. The answer to that of course is not known. Generally, the issue is that all Mother Nature had to work with were cells, such as neurons. But once we understand the neural code, there may be no reason that we can’t build it out of better substrates so that it’s accomplishing the same algorithms but in a much simpler way. This is one of the questions addressed this season.

Here’s an analogy: We wanted to fly like birds for centuries, and so everybody started by building devices that flapped wings. But eventually we figured out the principles of flight, and that enabled us to build fixed-wing aircraft that can fly much farther and faster than birds. Possibly we’ll be able to build better brains on our modern computational substrates.

Has anything on the show made you think differently about intelligence?

Eagleman: The show forces me to consider what level of intelligence would be required to make us believe that an android is conscious. As humans we’re very ready to anthropomorphize anything. Consider the latest episode, in which the androids at the party so easily fool the person into thinking they are humans, simply because they play the piano a certain way, or take off their glasses to wipe them, or give a funny facial expression. Once robots pass the Turing test, we’ll probably recognize that we’re just not that hard to fool.

Can we make androids behave like humans, but without the selfishness and violence that appears in Westworld and other works of science fiction?

Eagleman: I certainly think so. I would hate to be wrong about this, but so much of human behavior has to do with evolutionary constraints: competition for survival, for mating, for food. That history shapes every bit of our psychology. Androids, not possessing that history, would certainly show up with a very different psychology. It would be more of an acting job; they wouldn’t necessarily have the same kinds of emotions as us, if they had them at all. And this is tied into the question of whether they would have any consciousness, any internal experience, at all.

Are there any moments of especially humanlike behavior in the show?

Eagleman: In my book ‘Incognito’, I describe the brain as a team of rivals, by which I mean you have all these competing neural networks that want different things. If I offer you strawberry ice cream, part of your brain wants to eat it, part of your brain says, “Don’t eat it, you’ll get fat,” and so on.

We’re machines built of many different voices, and this is what makes humans interesting and nuanced and complex. In the writers’ room, I pointed out that one of the [android] hosts, Maeve, in the final episode of season one, finally gets on a train to escape Westworld, and she decides she’s going back in to find her daughter. She’s torn, she’s conflicted. If the androids had a single internal voice, they’d be missing much of the emotional coloration that humans have, such as regret and uncertainty and so on. Also, it just wouldn’t be very interesting to watch them.

According to Eagleman, Westworld isn’t a perfect depiction of what life could be like – at least not anytime soon. Most of what we see on the show is for entertainment, but that’s not to say we can’t reach that level of interaction with artificial intelligence someday.

Artificial Intelligence Can Prophesy When You’ll Die

I was born in the 1980s and like to believe I’ll live to be 120, so anyone planning to attend my burial ceremony will have to be around come 2105 or so. As for the exact date, I can’t say, but artificial intelligence promises that it can.

What’s the real story? According to AI researchers, machine intelligence has proved capable of predicting when a patient is likely to die.

Okay, there is no denying that doctors have long issued, and continue to issue, such predictions, usually with a detailed explanation of why the patient’s time is limited. Based on the stage of, say, a cancer or of HIV/AIDS, physicians can tell a patient that he or she has a certain number of weeks or years left.

But How Accurate Are These Predictions?


While a doctor’s prediction is not something to ignore, the time frame given often turns out to be inaccurate, and that inaccuracy has unwanted consequences. The patient may become uncertain about what to do with their remaining life, or grow ignorant of their condition, stressed and hopeless.

The point is this: while there is a real need to tell patients how long they can expect to live, there is an equal need for the estimate to be accurate, down to the month, so that people are not left planning for a death that “may” not happen as foretold. An accurate prognosis also lets the patient weigh other options, such as experimental or unapproved procedures, for instance attacking the cancer with robotic agents.

AI’s New Role is Predicting the Day of Death


So far, everybody agrees that death is inevitable, which is why it’s such a serious topic for humanity. Taking the subject with that weight, scientists at Stanford University’s School of Medicine have developed an AI-powered system that predicts when patients are likely to die, with an astonishingly high level of accuracy.

The researchers went through thousands of patient records, analyzed them, and extracted data to create an algorithm that predicts when a patient is likely to die. At present, the model works within a window of three to twelve months, a deliberately narrow bracket so the estimate is useful for planning.

Is the whole concept logically explainable? Definitely yes. In essence, archived patient records are used to set up and train a machine-learning model, which is then tested against both the estimates given by doctors and the actual dates of death.
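That train-then-test loop can be sketched in a few lines. Everything below is illustrative: the feature names and synthetic records are hypothetical stand-ins, not the Stanford team’s actual data or model (which reportedly used deep learning over electronic health records). The sketch uses a plain logistic classifier.

```python
import math, random

random.seed(0)

# Hypothetical features per record: [age/100, admissions/10, disease stage/4].
# Label: 1 if the patient died within the 3-12 month window, else 0.
def make_record(died):
    base = [0.8, 0.7, 0.9] if died else [0.5, 0.2, 0.3]
    return [b + random.uniform(-0.1, 0.1) for b in base], died

records = [make_record(i % 2) for i in range(200)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train a logistic model with plain gradient descent on log-loss.
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(500):
    for x, y in records:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = p - y                                   # gradient of the loss
        w = [wi - 0.1 * err * xi for wi, xi in zip(w, x)]
        b -= 0.1 * err

# "Test against the known outcomes": fraction of records classified correctly.
correct = sum((sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5) == bool(y)
              for x, y in records)
print(correct / len(records))
```

On cleanly separable toy data like this, the classifier reaches near-perfect accuracy; real patient records are far noisier, which is why the Stanford work needed a much larger model and dataset.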

Reliability and Accuracy of the Model


As reported by Lloyd Minor, dean of Stanford’s School of Medicine, writing in The Wall Street Journal, the system scored an average of 9 out of 10 in predicting when a patient would breathe his or her last.

“The system is not perfect yet, but it’s still a new invention, destined to improve with time,” said Minor. However, the dean was quick to stress that the AI will not replace physicians; it will be a tool in doctors’ hands.

It’s worth pointing out that medical care continues even after it’s been established that a patient will not survive, and this system will help physicians design better end-of-life care.

Will This Discourage Care?

Minor and his team argue that this information will not limit care. If anything, it will empower decision-making for everyone involved: patients can spend more time with those they love and prepare those they will soon leave behind, and they will know to live in a way that avoids fatigue, discomfort or severe pain.

On the other hand, though it may sit uneasily with some social beliefs, the information can also be used to ease financial stress: rather than seeking further treatment, the patient can travel early to where they wish to be laid to rest. There is also the matter of writing a will and spelling out clearly who takes what after you are gone.

In summary, the whole concept revolves around creating the best environment for the patient to live happier, more satisfied and with more confidence in their last days. That is a genuine solution made available by artificial intelligence, one that fulfills the technology’s mission of solving society’s problems.

Artificial Intelligence Now Evaluates Cell Therapy Functionality

To be precise, artificial intelligence entered the medical arena to help us do tasks faster or more accurately, but beyond that, the technology is now helping us accomplish tasks that were once next to impossible. Before this new venture, in which researchers use AI to evaluate cell therapy functionality, experts tried using machine intelligence to kill cancerous cells with tiny robots.

Although not approved for human use, a team of scientists managed to inject tiny autonomous agents into the bloodstream of a living specimen to shrink cancerous growths by blocking their blood supply.

Looking at that, and at other clinical work that focuses on the cell to create precision treatments for individual patients, we can see how the cell is becoming a key focus of therapy development in medicine.


Classifying Cells Using Artificial Intelligence

The deeper we go into certain medical studies, the more we need to know about cells: how they can be altered to pass an intended signal to the immune system so that it fights pathogens, and so on. But while that is a promising approach for developing customized treatments, one need has persisted: evaluating the effectiveness of the induced cells, in this case induced pluripotent stem cells.

In that line, researchers have built a fully automated, AI-driven multispectral imaging system that can classify the potency of stem cells for treating age-related macular degeneration (AMD), using retinal pigment epithelial cells derived from induced pluripotent stem cells (iPSC-RPE). The work was reported at the 2018 ARVO conference.

What’s in the New Software


For years, medics have tried to develop anti-aging solutions through drug research and other viable methods, but to be fair there has been no practical success yet. Nonetheless, doctors insist that it is indeed possible to slow aging, and in the long run even reverse it, because the whole endeavor revolves around reprogramming the DNA in cells.

Hopefully, this development points in that direction. The new algorithm deploys a convolutional neural network (CNN), a deep-learning technique, to analyze, evaluate and categorize iPSC-RPE cells for therapy in a standardized, cost-effective and reproducible fashion, all without human involvement, in an assessment that has long been a sensitive physiological and molecular assay left to experts alone.
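The core operation a CNN repeats over an absorbance image is convolution: sliding a small filter across the image and recording how strongly each patch matches. The toy sketch below (plain Python, with a hand-written edge filter rather than the learned filters a real CNN would use) shows that one building block; the tiny “absorbance image” is invented for illustration.

```python
# Toy 2D convolution: the building block of a CNN like the one used to
# grade iPSC-RPE absorbance images. A real CNN learns its filters from
# labeled examples; this one is hand-written for illustration.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A tiny "absorbance image": a dark region (1s) on a light background (0s).
image = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

# Vertical-edge filter: responds where intensity changes left to right.
kernel = [[1, 0, -1],
          [1, 0, -1],
          [1, 0, -1]]

feature_map = conv2d(image, kernel)
print(feature_map)  # strong responses along the left and right edges
```

A trained network stacks many such filters, then feeds the resulting feature maps into further layers that output a potency grade for the cell sample.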

“Our model can classify cells with high accuracy by just analyzing absorbance images,” explained Nathan Hotaling and his colleagues from the National Institutes of Health in the report. “Yes, the concept deploys multispectral absorbance imaging as the major technique, but it has proven to be extremely fast, non-invasive, robust, and above all, fully autonomous in gauging induced PSC maturity and in identifying and rating functionality,” he added.

Artificial Intelligence Channels its Way into Medicine

Initially, the debate was about whether machine intelligence would ever truly be in charge of medical diagnostics, but the way things are turning out, we may soon have machines as major consultants on therapeutic topics. Of course, the public will take some time before trusting such agents, but from another perspective, there will be a section of people who prefer a machine over a human as their main “doctor”.


That aside, AI is already gaining ground in the medical community, reflected in the way the FDA has started clearing AI-powered medical devices for human use. The first device to be officially cleared is a tool for detecting diabetic retinopathy, christened the IDx-DR.

IDx-DR is praised for being fast and accurate: it uses algorithms to analyze images from a retinal camera and deliver a diagnosis within half an hour, something that takes far longer with conventional methods.

Test score: in the study, the CNN and machine-learning models were used to compare absorbance images and establish the characteristics for grading iRPE in clinical assessment, and the system demonstrated 97 percent sensitivity, averaged across the evaluation criteria.

In short, we are entering a time when diagnostics will be performed autonomously by machines. AI is also cutting the time doctors need to design therapy suggestions.

Artificial Intelligence Helping Open Up The Vatican’s Secret Archives

If you’ve heard of the Vatican Secret Archives, you know that it’s one of the most grandiose collections of historical documents in the world. To this point, it’s also been of little use to those who have tried to understand the majority of what’s in it.

The VSA is located within the walls of the Vatican, just north of the Sistine Chapel and right next to the Apostolic Library. It houses 53 linear miles of shelving dating back more than 12 centuries. This includes a plea from Mary Queen of Scots to Pope Sixtus V just before she was executed, the papal bull that excommunicated Martin Luther, and much more. There’s nothing like the VSA anywhere in the world.


Of all the material in the VSA, only a small fraction has been made available online to researchers and students, and few pages of text have been scanned and made searchable. That makes it very difficult to find anything within the vast holdings behind the Vatican’s walls. If what you’re looking for isn’t available through the basic search, you must apply for access and comb through the archives yourself, and even then, there’s no guarantee.

In Codice Ratio Will Make A Difference

A new project called In Codice Ratio is marrying artificial intelligence with optical character recognition (OCR) to work through the information that has yet to be sorted and uploaded to the online database.

OCR is used to scan books, images, and other printed material and transform it into machine-encoded text. It is a common method of digitizing printed texts so that they can be electronically edited, searched, stored more compactly, displayed online, and used in machine processes such as cognitive computing, machine translation, text-to-speech, key-data extraction and text mining.

Traditional OCR works well on typeset text but poorly on handwritten documents, which make up the majority of the material in the VSA. OCR carves a line into letters by reading the spaces between characters, and handwriting yields what researchers call dirty segmentation: on text from centuries ago that looks like a mix of calligraphy and cursive, OCR can’t tell where one letter stops and another starts, and therefore doesn’t know how many letters there are.
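The “reading the spaces” idea can be made concrete. In the toy sketch below (plain Python over a made-up bitmap, not a real scan or any In Codice Ratio code), characters are found by summing the ink in each pixel column and splitting wherever a blank column appears, which is exactly the step that fails when cursive strokes connect the letters.

```python
# Toy segmentation: split a binary text-line bitmap into characters by
# finding blank (ink-free) pixel columns between them. Cursive has no
# blank columns, which is why this classic OCR step breaks down.

def segment_columns(bitmap):
    width = len(bitmap[0])
    ink = [sum(row[x] for row in bitmap) for x in range(width)]  # ink per column
    spans, start = [], None
    for x, amount in enumerate(ink):
        if amount and start is None:
            start = x                      # a character begins
        elif not amount and start is not None:
            spans.append((start, x))       # blank column: character ends
            start = None
    if start is not None:
        spans.append((start, width))
    return spans

# Two separate "letters" with a blank column between them (printed text)...
printed = [[1, 1, 0, 1, 1],
           [1, 1, 0, 1, 1]]
# ...versus the same letters joined by a connecting stroke (cursive).
cursive = [[1, 1, 0, 1, 1],
           [1, 1, 1, 1, 1]]

print(segment_columns(printed))  # two character spans
print(segment_columns(cursive))  # one span: the letters can't be separated
```

On the “printed” bitmap this finds two characters; on the “cursive” one it finds a single blob, which is the failure jigsaw segmentation was designed to overcome.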


By using AI to enhance OCR and create what is known as jigsaw segmentation, the tool identifies individual pen strokes rather than relying on whole letters or words. The four scientists behind the project – Paolo Merialdo, Donatella Firmani, and Elena Nieddu at Roma Tre University, and Marco Maiorino at the VSA – explain how the process works: the OCR looks at the thinner strokes, making them easier to analyze, then carves out letters at the joints between strokes, producing pieces that look like parts of a jigsaw puzzle. These pieces are then recognized and turned into searchable data, which in this case is uploaded to an online database.

The VSA isn’t the only archive the project has its sights on, either. If it can successfully open up the VSA, In Codice Ratio plans to tackle other large archives around the world.

Putting OCR to the Test

To give the refined OCR a chance to show its true potential, the team had it scour documents from the Vatican Registers. The 18,000-page batch includes letters to European kings, rulings on legal matters, and other correspondence between rulers and religious leaders from centuries past.

The software achieved a 96% success rate reading through the documents. The most common mistakes involved the letters ‘m’, ‘n’, and ‘i’, as well as confusing the archaic ‘f’ with ‘s’. While 100% accuracy would be ideal, “imperfect transcriptions can provide enough information and context about the manuscript at hand” to be useful, says Paolo Merialdo.


The team says that, like any form of AI, the process will improve over time: as it learns to detect finer distinctions between these letters, the results will become more accurate. In Codice Ratio also plans to apply its strategy – jigsaw segmentation combined with crowdsourced training of the software – to other projects and languages.

Although the project is expected to advance this type of research considerably, Rega Wood, a historian of philosophy and paleographer (an expert on ancient handwriting) at Indiana University, notes that even artificial intelligence will always have limitations. It “will be problematic for manuscripts that are not professionally written but copied by nonprofessionals,” she says, and the larger the amount of material it must handle, the less accurate it will be as well. In some cases, says Wood, “it is not only more accurate, but just as quick to make transcriptions without such technology.”

Artificial Intelligence to Help Separate Fact From Fiction

They say “the truth sets you free.” While the saying is not common in the tech world, artificial intelligence, as a problem solver, is being looked to by humanity to help tell what’s real from what’s fake – separating fact from fiction, in this case.

We’ve seen couples pick fights after being sent photos of a spouse caught in a compromising situation. And somehow, “Photoshop” or technology always gets the blame, even when the photo is genuine.

At times the fabrication is so crude that you can easily tell the image is not original, but as technology advances, separating original images from Photoshopped ones is becoming harder. So who becomes the judge here? Fingers are pointing at artificial intelligence and its sister technologies.

But How Do Fact and Fiction Connect With AI?


In many ways. For one, fiction – the keyword here – originates from a work of the imagination: someone with a creative mind, borrowing from the real world, creates something that resembles the original.

Fiction can exist in many forms and shapes, including a hand drawing, an edited photo, a story or a movie, but most of the time it creeps into our everyday conversation.

A simple example: when trying to please someone, we tend to say only what they want to hear rather than basing the conversation on facts. Luckily, artificial intelligence happens to function somewhat like our brain – the organ behind everything fictional – which means it might help when we need to separate fakeness from truth in conversation.

As a leader, of course you want to know everything about the team behind you, but unfortunately even your most trusted personal assistant will tend to tell you what you want to hear, hiding facts that could have been far more useful on the subject at hand.

Artificial Intelligence for Fact-Based Leadership Training


A while ago, McKinsey research pointed to how AI can be used to train better leaders and to predict which aspirants have stronger leadership skills. For the most part, the concept was linked to how individuals behave on social networks: how they treat and collaborate with their contacts, what they read most about, and so on.

In other words, machine intelligence can follow profiles to establish people’s character, or force aspirants to invest in discipline and in reading leadership material, lest they risk not being elected.

Reading the Online Weather

From another perspective, McKinsey’s work explains that using social network analytics, a leader already in office can learn what exactly is required. For example, the report describes how, after dissecting more than 100 sociograms from social network analyses (SNAs) of organizations of different sizes, the experts confirmed that it is easy to tell most of what is going on in a workplace.

The subject is broad, and understanding it may depend on one’s area of interest. For instance, some researchers are already working with AI to help separate authentic news from fake news. The hope is that artificial intelligence, as of now, remains our best tool for separating fact from fiction.

Nvidia’s New Deep Learning Method Could Make Photoshop Obsolete

Many people like using Photoshop to edit their pictures, considering how good the program is. However, would fans of the program jump ship if there were another way to edit and reconstruct images with scary accuracy? According to Nvidia’s artificial-intelligence research team, they have developed a deep-learning method that could effectively make Photoshop obsolete.


What is Deep Learning?

Writer Wajeeh Maaz has reported that Nvidia has developed a way to take a corrupted image and edit or reconstruct it with such accuracy that it could make Photoshop a thing of the past. The work comes from Nvidia’s research team focused on artificial intelligence, and the application relies on what is known as deep learning. But what exactly is deep learning?

Deep learning is a branch of artificial intelligence that imitates the way the human brain finds patterns in data and uses them to make decisions. The software can take parts of an image that have been modified or deleted and replace, or “in-paint”, them with digital reconstructions based on the uncorrupted image that remains; the final products are scarily accurate.
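As a rough intuition for in-painting (this is not Nvidia’s partial-convolution method, which uses a trained deep network), the sketch below fills masked pixels by repeatedly averaging their known neighbors, a classical diffusion approach that illustrates the core idea of reconstructing a hole from the uncorrupted surroundings.

```python
# Toy in-painting by diffusion: each unknown (masked) pixel is filled
# with the average of its 4-neighbors, repeated until it settles.
# Nvidia's method replaces this with a deep network, but the goal is
# the same: reconstruct the hole from the surviving image content.

def inpaint(image, mask, iterations=50):
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]
    for _ in range(iterations):
        nxt = [row[:] for row in img]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:             # only update pixels inside the hole
                    vals = [img[ny][nx] for ny, nx in
                            ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                            if 0 <= ny < h and 0 <= nx < w]
                    nxt[y][x] = sum(vals) / len(vals)
        img = nxt
    return img

# A flat grey image (value 0.5) with one "corrupted" pixel in the middle.
image = [[0.5] * 5 for _ in range(5)]
image[2][2] = 0.0                          # the hole
mask = [[0] * 5 for _ in range(5)]
mask[2][2] = 1

restored = inpaint(image, mask)
print(round(restored[2][2], 3))            # the hole blends back into the grey
```

Diffusion like this only smears nearby colors into the hole; the point of Nvidia’s deep network is that it can hallucinate plausible structure (edges, textures, whole objects) rather than just a blur.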


Nvidia Publishes a Whitepaper Describing the Method

Nvidia released a video showing a variety of objects, from gigantic rocks to door panes and pillars, being erased entirely and seamlessly from images while leaving the rest of each image intact. In a published whitepaper, Nvidia stated that this natural blending with the remaining image was an important element of what they wanted to achieve.

The whitepaper said: “Previous deep learning approaches have focused on rectangular regions located around the center of the image, and often rely on expensive post-processing. The goal of this work is to propose a model for image in-painting that operates robustly on irregular hole patterns, and produces semantically meaningful predictions that incorporate smoothly with the rest of the image without the need for any additional post-processing or blending operation.”

However, there is more to recognize here than a way to reconstruct corrupted images into accurate ones. The project researchers consider themselves the first to successfully train a neural network – a computing architecture loosely modeled on the human brain – to process irregularly shaped holes in images.


This would spare graphic designers hours of work with masking layers in tools like Photoshop to achieve natural, seamless results. The project is a proof of concept, much like Nvidia’s earlier AI that changed dogs into cats.

Time will tell how popular the concept becomes, and whether Photoshop users will one day leave the editor behind for an Nvidia product built on this deep-learning method.

5 Companies Fueling AI Development

It’s now clear that no amount of criticism will stop AI from becoming the future doctor or judge, or a major component of the future car, airplane or submarine. The market for the technology is growing in leaps and bounds, and over 70% of major companies, Facebook and Google included, now depend on machine intelligence to make a profit. In short, AI is the future of business.

The technology is a hot cake, and it’s a smart move for startups to venture in, because the field is still green with inexhaustible opportunities. However, anyone planning to try their luck with artificial intelligence should know there are big fish already in the water, and it pays to know where they are focusing, to see what works.

Top Companies Pushing AI Progress



Google

It would not be fair to leave the big G, Google, off the top of the list of companies driving AI. Besides having invested heavily in the technology, Google is already reaping the benefits of machine intelligence through its search engine, which handles some 74% of internet searches.

Google’s major focus has been developing brain-inspired neural networks, which can also be served online as a service. Along the way, the firm has produced solutions like Google Assistant and HDR+ photography. The big G has also shared its technology with the public to encourage independent developers, opening access to AI platforms such as TensorFlow and AutoML.


DeepMind

To be clear, DeepMind is a Google company, but it operates and stands largely on its own, and its main occupation is research rather than profit-making. Among its achievements, DeepMind is the firm behind AlphaGo, a model the world’s best Go players cannot beat, alongside deep research into voice technology.


IBM

When IBM is mentioned, most of us think of the awaited 50-plus-qubit quantum computer that might redefine the PC. But that’s not all: IBM is driving other technologies, and Watson, built on algorithms useful in medicine and in natural language processing, is one of them. There’s no denying that Watson is not very accessible as of now, but that may change as IBM works to engage the public.


Facebook

Beyond being a social site, Facebook has its own engineers, and they have been working to establish a voice in the AI world. The social giant has partnered with Microsoft on a machine-intelligence project christened ONNX (Open Neural Network Exchange). On its own, Facebook runs a deep-learning framework called Caffe2, which it uses to identify and automatically remove unwanted content from the platform. Likewise, looking at Facebook’s bots, which help businesses target ads at potential clients, it’s clear the company is deep into AI.


Microsoft

Of course, this list cannot be complete without Microsoft. To be precise, Microsoft is still the biggest software developer the world has known, but in recent years the company has been veering toward the artificial brain at amazing speed. As noted above, it has worked with Facebook on the ONNX project, and among other efforts, Xiaoice, one of Microsoft’s chatbots, has seen great success in China.


Other Startups that Have Found a Voice in AI

In summary, there are countless other startups exploring the AI world, and from interesting angles. Nvidia is using machine learning to create eyes for driverless cars; Affectiva is also targeting the autonomous vehicle, seeking ways to make cars detect passengers’ emotions; and many other AI-focused startups have yet to be noticed.

Magicians Integrate AI To Perform Amazing Tricks

Illusionists and magicians never cease to wow their crowds with awesome performances. The fun is now taking a new turn as magicians incorporate machine-learning technology to deliver better tricks. So far the fusion is in its initial stages, with more research and testing expected to commence.

It’s on this note that this article seeks to explain how the two work together.

Artificial Intelligence Finds Its Way Into The Magic World

One magician, Tom Webb, showcased how mind-blowing tricks can work with artificial intelligence. He engaged a volunteer and asked an Amazon Echo to name and pick a card; the card was then moved into position by a drone.


“Using AI to create magic tricks is a great way to demonstrate the possibilities of computer intelligence and it also forms part of our research into the psychology of being the spectator,” stated Webb.

The interesting bit is that machine learning replicates how the human mind functions, which allows magicians to use it to perfect their tricks and illusions.

Use Of Complex Algorithms To Better Magic Tricks

To get to this point, researchers had to design algorithms that read and interpret human perception. These complex algorithms go through the search terms people key in when looking for certain items; it is this, and other subconscious activity, that researchers tap into.

The latest development is good news for both parties – scientists and magicians – as it strengthens the bond between them. Technology makes magic more fascinating by hiding what goes on behind the tricks. It hasn’t been an easy task, however, given the limited literature.


According to Brian Curry, magic tricks performed 15 years ago may not be so thrilling today, since some have been replaced by mobile apps and programs. You could also say that technological advancement is what will keep magic relevant over time.

Magic And Technology Correlation

As early as the 1800s, magic tricks relied on science and physics. For instance, Jean-Eugène Robert-Houdin performed a trick that used an electromagnet to hold a box to the ground: when a child came forward to lift it, he did so easily, but when an adult tried, he could not.

Last year, Tom Webb performed a magic trick dubbed ‘hacker simulation’ on America’s Got Talent. It entailed controlling the phones of his audience while on stage. He acknowledges, however, that performing such an act on stage requires a lot of practice to perfect.

This is just one of many scenarios where magic benefits from science and technology. On the other end, some analysts dispute the relationship between the two, believing that magic is a standalone art that works without the help of science.

Howard Williams, a co-creator of the Phoney app, sees a clear link between the two.


“Computer intelligence can process much larger amounts of information and run through all the possible outcomes in a way that is almost impossible for a person to do on their own.”

“So while a member of the audience might have seen a variation on this trick before, the AI can now use psychological and mathematical principles to create lots of different versions and keep audiences guessing,” says Williams.
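The idea Williams describes, an AI running through possible outcomes to keep a trick fresh, can be pictured as a simple search over trick variants. The sketch below is a hypothetical illustration, not the Phoney app's actual model: the card-popularity figures, the reveal methods, and the scoring function are all invented for the example.

```python
import itertools

# Hypothetical proxy for audience psychology: how often people
# think of each card. Rarer picks feel more "magical" (all
# numbers invented for illustration).
CARD_POPULARITY = {"ace of spades": 0.9, "queen of hearts": 0.8,
                   "seven of clubs": 0.2, "three of diamonds": 0.1}
REVEAL_BONUS = {"drone": 0.5, "envelope": 0.3, "pocket": 0.1}

def surprise(card, reveal):
    # Less popular cards and more dramatic reveals score higher.
    return (1 - CARD_POPULARITY[card]) + REVEAL_BONUS[reveal]

def best_variants(n=3):
    # Enumerate every (card, reveal) combination and keep the
    # n most surprising ones -- the "keep audiences guessing" step.
    variants = itertools.product(CARD_POPULARITY, REVEAL_BONUS)
    return sorted(variants, key=lambda v: surprise(*v), reverse=True)[:n]

print(best_variants())
```

A real system would replace the hand-coded scores with data learned from spectators, but the search-and-rank loop is the core of the approach the article describes.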

A Bright Future For Magic And Computer Intelligence

Magicians and computer scientists are enthusiastic about the great potential of the collaboration. At some stage, magicians will be able to perform new optical illusions, for example making a cabinet appear too small to conceal the performer hidden inside it.

At the end of the day, what matters most is how the magician packages his or her tricks. As Peter McOwan, a professor at Queen Mary University of London, puts it: “The real magic lies with the magician.”

Why Artificial Intelligence Might Trigger a Nuclear War

It is said that the only way to win a battle involving nuclear weapons is not to engage at all; at least, that has held since World War II ended. Unfortunately, artificial intelligence has come onto the scene, and with it a new study by RAND warning that AI might help humanity wipe itself out with nuclear weapons.

But let's set artificial intelligence aside for a moment. The greater regret remains how, and why, man came to develop weapons of mass destruction at all; the invention of nuclear weapons may be one of the biggest mistakes humanity has ever made.

A single launch could wipe out an entire city, and given that capacity, it is no surprise that those who want to control the world have invested heavily in these weapons. Worst of all, rival countries are now using AI to spy on each other.

The Cost of Using AI to Snoop on Each Other’s Nuclear Inventory


Initially, developers recommended machine intelligence for simple surveillance and spying tasks, such as alerting shop attendants to suspicious movements that may suggest shoplifting: AI can raise an alarm when a shopper appears to be leaving the premises with unpaid goods.

But today, governments are adopting the technology for far more serious tasks, chief among them spying on rival countries. There is a lot to learn by snooping on an opponent, since intelligence is part of how wars are won. The US alone is currently working on over 140 projects intended to boost its surveillance capabilities, and reports say China and Russia are already doing the same. The fear is what happens when rivals begin to worry about each other's capacity.

How and Why AI Might Trigger War


When it comes to nuclear weapons, the time bomb is no longer really about autonomous weapons. It is about how merely knowing that your enemy is better or worse equipped might spark conflict. By simple logic: if a government that has been lying low manages to snoop on its rival's arsenal and discovers that it now holds the greater strength, and could actually overpower its opponent, why would it not try to claim its “new position”?

In short, AI is giving militaries the capacity to monitor an opponent: to uncover the weapons it is investing in, mark its strongholds, and map its connections to the outside world. Such information can trigger itchy fingers and panic, and could feed the mentality of “we need to strike first, because if they strike first we won't be able to return the punch.”

To explain how computers can trigger a potentially destructive gut reaction, the study recounts the case of Lt. Col. Stanislav Petrov, sitting in his commander's chair, monitoring systems that crunch data from radar and satellites for signs of a missile launch to counter. Suddenly a siren goes off and the button turns red with the demand: “Launch!”

He notices that, according to the intelligence record, the purported attacker has only five missiles in its arsenal, and wonders how it could dare challenge the United States with so little capacity. “This must be a false signal,” he decides. The computer insists a new attack is on the way, but Petrov makes a phone call and confirms: “That's a false alarm; it is not logical.” And the study continues to explain...

Science Fiction or Not, This May Be A Reality Soon


There is no denying that an arms race is already under way among certain nations, stoked by statements like the one from Russia's president earlier this year that whoever leads in AI will rule the world. Prominent technology figures, the late Stephen Hawking, Elon Musk, Jack Ma and Bill Gates among them, may also have helped create the mentality that artificial intelligence is the only way to rule the world in future.

Back in 2016, Prof. Hawking said AI might be used by a few to oppress or enslave the larger population, and that the technology might advance to the point where it could rule over humanity. Musk, for his part, has repeatedly warned that machine intelligence could spark a world war in any number of ways.

Now, according to the study, the danger in this technology is not just its capability to power autonomous weapons. It could also stoke anxiety, feelings of inferiority, and runaway imaginations that tempt people toward the wrong options, like the nuclear route. In other words, when one side realizes it does not have what it takes, it may respond by investing even more in these dangerous weapons, which ultimately makes the world more unstable.

The Other Side of the Coin

While it is clear how easily this technology could destabilize peace, there is also the other side of the coin, where it could just as well help end conflicts. By the same logic, once rivals learn each other's capabilities, they may not want to engage at all: each will understand that the opponent can return a harder punch, and that there will be no opportunity to answer back, because they will no longer be there.

As nuclear tension rises higher with each new invention, other scientists hope AI will make rivals wary enough of each other to settle on the saying: “The only way to nuclear safety is not to engage at all.”

The CIA is Probing Artificial Intelligence for A Super Spy Job

The good book claims that we will reach a time when all citizens of the earth will be traceable by a number encompassing a universal code, “666”, and that a system (maybe manned by the CIA) will be able to instantly track down those who refuse to register; so the prophecy goes. Well, could artificial intelligence be paving the way for that now?

Because in the near future, the US Central Intelligence Agency (CIA) will be able to know everything happening in a target place, country or region in real time, without any contact on the ground. What does that suggest, to be precise? Simple:

CIA’s Next Super Spy Could Be None Other Than AI


That may sound far-fetched, but technologies pointing to that possibility are already making waves. Google is using satellite imagery to help the aviation and defense industries with decision-making, and Facebook is on record as successfully pairing its satellite data with machine learning.

In a different approach, Alibaba is using face and sound recognition technology to monitor over 10 million pigs with remarkable accuracy. Besides that, China's police have deployed a system built on ET Brain that can autonomously flag suspicious activity in CCTV clips.

Now, taking a rather special approach, the CIA is pursuing nearly 140 projects, all aimed at expanding the role of artificial intelligence in surveillance: scanning and combing through video collected from spy cameras and closed-circuit cameras.

Something close to this is already happening in a different niche, where scientists have devised an algorithm that turns poachers into prey by reporting their actions in near real time: the AI analyzes live video taken by hovering drones and alerts game rangers. Connecting this to human security, it is evident that machine intelligence is extending a helping hand.

Digital Surveillance


Dawn Meyerriecks, deputy director of the CIA's Directorate of Science and Technology, is quoted by CNN explaining that digital surveillance has been so successful it is now replacing physical tracking. “Led by Singapore, both wireless-based and closed-circuit television digital surveillance has proved very effective in over 30 countries now; the US needs to advance its monitoring tech from such a foundation.”

Meyerriecks also described how a team of experts took unclassified aerial footage of a street and paired it with machine learning to generate algorithms that map out strategic virtual camera placements in remote capitals that are unsafe for physical access. In simple language, this means surveillance software linked to a satellite can act as a map of cameras over a target region, without any contact on the ground.
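The placement problem behind this, choosing camera sites so that as much of the target area as possible is visible, can be illustrated with a classic greedy maximum-coverage heuristic. This is only a toy stand-in for the learned model Meyerriecks describes; the site names and visibility sets below are invented:

```python
def greedy_placement(candidates, k):
    """Pick k camera sites that together see the most street segments.

    candidates: dict mapping a site name to the set of street
    segments visible from it (hypothetical example data).
    """
    covered, chosen = set(), []
    for _ in range(k):
        # Greedy step: take the site adding the most uncovered segments.
        site = max(candidates, key=lambda s: len(candidates[s] - covered))
        chosen.append(site)
        covered |= candidates[site]
    return chosen, covered

sites = {"rooftop_a": {1, 2, 3}, "corner_b": {3, 4}, "gate_c": {5}}
print(greedy_placement(sites, 2))
```

Greedy selection is a standard approximation for coverage problems; a learned system would additionally estimate each site's visibility set from imagery instead of being handed it.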

Besides being highly effective and easy to manage, since everything can be monitored from a single room, the method can also be used for counter-surveillance, updating field agents when, and if, they are being watched. This way the US can better guarantee the security of its diplomats in rival states like Russia or North Korea, and reduce the risk of exposing American spies who might otherwise be kidnapped or killed.

Boosting Surveillance Efficiency

The traditional method has been to revisit CCTV footage only after a crime has occurred, or, with luck, to pay attention to what is being recorded once something fishy has been noticed. The National Geospatial-Intelligence Agency manages to go a bit beyond that: Robert Cardillo, the agency's director, says he assigns part of the security team to spend time analyzing the monitors. But it is obvious this is still not as reliable or efficient as it ought to be; the team might sleep on the job.

Mr. Cardillo supports the idea of using AI to analyze what is going on in a scene, because a machine stays alert where a human tires. On top of that, the algorithms have proved they can detect patterns of events in video and automatically alert the security team to suspicious images or movements for further action.
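The pattern-detection idea Cardillo supports can be reduced, in spirit, to flagging frames that change sharply from the one before them. The sketch below is deliberately minimal: frames are modeled as flat lists of brightness values and the threshold is arbitrary, whereas a real system would use a vision library and trained models.

```python
def frame_diff(a, b):
    # Mean absolute brightness change between two equal-length frames.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def suspicious_frames(frames, threshold=30):
    """Return indices of frames differing sharply from their predecessor."""
    alerts = []
    for i in range(1, len(frames)):
        if frame_diff(frames[i - 1], frames[i]) > threshold:
            alerts.append(i)  # frame worth flagging for human review
    return alerts

# A calm scene with one sudden bright event in the middle.
calm = [[10] * 4] * 5
burst = calm[:2] + [[200] * 4] + calm[2:]
print(suspicious_frames(burst))  # flags the change into and out of the event
```

Simple frame differencing like this catches abrupt changes; detecting subtler "suspicious moves" is where the learned models the article alludes to come in.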
