The Impact of Artificial Intelligence on Society – Fagen wasanni

This summer, artificial intelligence (AI) demonstrated its remarkable capability by extracting John Lennon's voice from a demo song recorded shortly before his death in 1980. By removing the electrical buzzing and piano accompaniment, AI isolated Lennon's voice so it could be mixed into a final Beatles project led by Paul McCartney.

The ability of AI to recognize distinctive human voices has captivated the attention of many. However, it has also raised concerns about the potential impact of this powerful tool. Like any tool, AI's impact depends on the intentions of the user. While it has many beneficial uses in our daily lives, such as grammar autocorrect and real-time navigation on smartphones, there is also the possibility of AI being manipulated for malicious purposes.

Instances of AI impersonating individuals for nefarious reasons have already occurred. For example, a mother in Arizona received a convincing AI-engineered recording of her daughter screaming that she had been kidnapped. The perpetrator threatened to harm the girl if a ransom was not paid. Fortunately, it was later discovered that the girl was safe at a skiing competition, but this incident highlights the potential dangers of AI.

These contrasting stories of AI's applications underscore the need for responsible use and regulation of this technology. While international gatekeepers work towards encouraging responsible AI utilization and preventing its abuses, it is essential for individuals to understand the implications and impact of AI in their daily lives.

Taking the time to understand ourselves and others on a deeper level through traditional means is crucial. A chance encounter between strangers, as witnessed during a family reunion, demonstrated how people from different backgrounds and worlds can connect through simple gestures. Moreover, taking the time to pay attention to nonverbal cues and support those with special needs, like the author's son, fosters true understanding and communication.

Additionally, AI can assist in organizing and finding relevant photos, as demonstrated by face recognition technology. However, there will always be a significant difference between recognizing someone's face and cherishing the connection and memories associated with that individual.

In conclusion, while AI has undoubtedly shown its potential for innovation and discovery, it is crucial to exercise caution and responsible usage to prevent any negative consequences. Balancing the benefits of AI with human connection and understanding is key to ensuring a harmonious coexistence with this technology.

Artificial Intelligence and the Perception of Dogs’ Ears – Fagen wasanni

The use of generative artificial intelligence in the world of art has sparked mixed reactions. Photographer Sophie Gamand recently explored how AI views dogs' ears in her project featuring shelter dogs with cropped ears. Surprisingly, the AI algorithms leaned towards the belief that dogs should have floppy ears, despite the existence of breed standards and human preferences for cropped ears.

Using her own photographs of shelter dogs, many of which had severely shortened ears, Gamand aimed to restore their ears through AI technology. She utilized the DALL-E 2 program to understand how AI perceives a dog's appearance. Although the process was occasionally frustrating, Gamand wanted to minimize her interference to truly explore what the computer thought a dog should look like. It turned out that AI considers dogs to have intact ears.

Gamand believes that AI has the potential to separate genuine artists from those who rely too heavily on the technology. While AI can create stunning images, it is crucial for artists to consider their own artistic context, aesthetics, and the messages they want to convey. The use of AI should align with an artist's overall vision and not solely rely on the work of others.

The ear cropping project is just one example of Gamand using AI in her work. She has also transformed AI interpretations of dogs into oil paintings and used ChatGPT to craft a letter from a shelter dog to its previous owner. Despite the benefits of AI, Gamand emphasizes the importance of ethical and honest artistic practices with this technology.

Gamand's photography focuses on raising awareness for misunderstood dog breeds and animals in shelters. She has dedicated her time to volunteering at shelters across the United States and has successfully fundraised for animal shelters through her Instagram feed. Gamand believes that photographs have the power to create emotional connections between adoptable animals and potential pet owners.

Through her artwork, Gamand aims to reflect on humanity by observing dogs. However, sometimes the mirror reveals uncomfortable truths, such as the prevalence of ear cropping. She questions why certain breeds continue to undergo this procedure for aesthetic reasons, even though they are living safely as family pets. Gamand believes this reflects a broader issue in our relationship with dogs and the natural world, highlighting the need for better understanding and decision-making on behalf of our companions.

The Elements of AI: Free Online Course on Artificial Intelligence – Fagen wasanni

The field of artificial intelligence (AI) has revolutionized various aspects of our lives, enabling machines to perform tasks that were previously exclusive to human intelligence. However, along with the countless opportunities that this technological revolution has brought, there are also ethical, security, and regulatory challenges to navigate. To address this pressing need, an online initiative called Elements of AI has been created.

Elements of AI is a collaboration between Reaktor Inc. and the University of Helsinki that offers an online course providing a solid foundation for understanding AI. The course is free of charge, making it accessible to anyone interested in delving into the fascinating world of AI.

The course is divided into two parts. The first section, Introduction to AI, introduces participants to the core concepts of AI. This module is designed for beginners who have no prior knowledge of AI. The second section, Creating AI, is aimed at individuals with basic programming skills in Python. In this phase of the course, participants explore how to build practical AI applications and delve into the capabilities of this disruptive technology.

Upon completing the course, participants receive an Artificial Intelligence certification, which not only enriches their knowledge but also adds professional credibility. In a competitive and rapidly evolving job market, this certification serves as a mark of quality and competence.

Since its launch in May 2018, Elements of AI has had over 140,000 subscriptions from more than 90 countries worldwide. The vision behind this course is to inspire, educate, and promote well-being through knowledge. It has been praised by Sundar Pichai, CEO of Google, as an inspiring example that levels the playing field and allows more people to benefit from the advances of AI.

Artificial Intelligence Used to Create Image of Former CM in … – Fagen wasanni

As the popular flower show at Lal Bagh botanical gardens in Bangalore kicks off, a multimedia company called Maya Films is utilizing Artificial Intelligence (AI) to create images. This year, the flower show pays tribute to Kengal Hanumanthaiah, a former Chief Minister of Karnataka. Maya Films will use AI to generate an image of the late CM taking a stroll in Lal Bagh.

Kengal Hanumanthaiah served as Karnataka's second chief minister from 1952 to 1956 and was known for his involvement in the construction of Vidhana Soudha. During his free time, Hanumanthaiah enjoyed walking inside Lal Bagh, but there is no recorded image of him inside the park. In honor of his contributions, Maya Films decided to recreate the scene using AI.

The theme of this year's flower show is the Vidhana Soudha, which is the seat of the state legislature in Karnataka. To complement the tribute to Hanumanthaiah, a replica of Vidhana Soudha has been erected with flowers next to his statue inside Lal Bagh. The flower show is conducted twice a year, and Karnataka Chief Minister Siddaramaiah inaugurated the event.

The use of AI in creating the image of the former CM showcases the potential of technology in the field of multimedia. By harnessing AI, Maya Films aims to give visitors the opportunity to witness history and experience the presence of Hanumanthaiah in Lal Bagh during the 1950s. This innovative project serves as a testament to how AI can be utilized in creative ways to enhance our understanding of the past.

Artificial Intelligence Chatbots are Known to Spout Falsehoods – Fagen wasanni

Artificial intelligence chatbots, including OpenAI's ChatGPT and Anthropic's Claude 2, have been found to produce false information, leading to concerns among businesses, organizations, and students using these systems. The issue of generating inaccurate information, described as "hallucination" or "confabulation," poses challenges for tasks that require reliable document composition and work completion. Developers of large language models, such as ChatGPT-maker OpenAI and Anthropic, acknowledge the problem and are actively working to improve the truthfulness of their AI systems.

However, experts question whether these models will ever reach a level of accuracy that would allow them to safely provide medical advice or perform other critical tasks. Linguistics professor Emily Bender suggests that the mismatch between the technology and its proposed use cases makes it inherently unfixable. The reliability of generative AI technology is crucial, as it is projected to contribute trillions of dollars to the global economy.

The use of generative AI extends beyond chatbots and includes technology that can generate images, videos, music, and computer code. Accuracy is particularly important in applications like news-writing AI products and recipe generation. For example, a single hallucinated ingredient in a recipe could lead to an inedible meal. Partnerships between AI developers like OpenAI and news organizations like the Associated Press highlight the significance of accurate language generation.

While the CEO of OpenAI, Sam Altman, expresses optimism about addressing the hallucination problem, experts like Emily Bender believe that improvements in language models won't be sufficient. Language models are designed to model the likelihood of different word strings, making them adept at mimicking writing styles but prone to errors and failure modes.
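As a rough illustration of what "modeling the likelihood of word strings" means, here is a toy bigram model (my own sketch, nothing like the large neural networks behind production chatbots). It scores how plausible a sentence sounds given its training text, and a fluent but false sentence can score just as well as a true one, which is the root of the hallucination problem described above.

```python
# Toy illustration of a language model as a likelihood estimator over word strings.
# Real chatbots use large neural networks, but the underlying principle is similar:
# the model scores how plausible a word sequence is, not whether it is factually true.
from collections import Counter

corpus = (
    "the capital of france is paris . "
    "the capital of france is beautiful . "
    "the capital of italy is rome . "
).split()

bigrams = Counter(zip(corpus, corpus[1:]))   # counts of adjacent word pairs
unigrams = Counter(corpus)                   # counts of single words

def sequence_probability(words):
    """Probability of a word sequence under a bigram model with add-one smoothing."""
    prob = 1.0
    vocab_size = len(unigrams)
    for prev, word in zip(words, words[1:]):
        prob *= (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)
    return prob

# A fluent but false sentence can score as well as a true one, because the model only
# measures how likely the wording is given its training text.
print(sequence_probability("the capital of france is paris".split()))
print(sequence_probability("the capital of france is rome".split()))
```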

Despite potential accuracy issues, marketing firms find value in chatbots that produce creative ideas and unique perspectives. The Texas-based startup Jasper AI collaborates with OpenAI, Anthropic, Google, and Meta (formerly Facebook) to offer AI language models tailored to clients' specific requirements, including accuracy and security concerns.

Addressing the challenges of hallucination and improving the reliability of AI chatbots and language models will contribute to their widespread and trustworthy use for various applications.

The Role of Big Data and Artificial Intelligence in Asia Pacific … – Fagen wasanni

Exploring the Impact of Big Data and Artificial Intelligence on Asia Pacific Hospital Information Systems

The role of Big Data and Artificial Intelligence (AI) in Asia Pacific Hospital Information Systems is rapidly evolving, transforming the healthcare landscape in unprecedented ways. This shift is driven by the need to improve patient care, streamline operations, and enhance decision-making processes in healthcare institutions.

Big Data, a term that refers to the vast amount of data generated every second, is being harnessed by hospitals to gain insights into patient health, disease patterns, and treatment outcomes. This data, which can range from patient records to real-time monitoring of vital signs, is analyzed to identify trends, predict outcomes, and inform treatment plans. For instance, in Singapore, the use of Big Data in healthcare has led to the development of predictive models that can forecast disease outbreaks, enabling authorities to take proactive measures.

Artificial Intelligence, on the other hand, is being used to automate routine tasks, analyze complex medical data, and even assist in diagnosis and treatment. In Japan, AI is being integrated into hospital information systems to help doctors interpret medical images, reducing the time taken to diagnose conditions such as cancer. Similarly, in China, AI algorithms are being used to analyze electronic health records to predict patient readmission rates, helping hospitals to manage resources more effectively.

The integration of Big Data and AI into hospital information systems is not without challenges. Data privacy and security are major concerns, especially given the sensitive nature of health information. Hospitals must ensure that they have robust systems in place to protect patient data from breaches and misuse. Additionally, the lack of standardized data formats can hinder the effective use of Big Data, while the complexity of medical data can pose challenges for AI algorithms.

Despite these challenges, the potential benefits of Big Data and AI in healthcare are immense. They can lead to more accurate diagnoses, personalized treatment plans, and improved patient outcomes. Moreover, they can help healthcare providers to identify inefficiencies, reduce costs, and improve the quality of care.

In the Asia Pacific region, governments are recognizing the potential of Big Data and AI in healthcare and are taking steps to foster their adoption. For example, the Australian government has launched a national strategy to harness the power of AI in healthcare, while the Indian government has initiated a program to promote the use of Big Data in public health.

The role of Big Data and AI in Asia Pacific Hospital Information Systems is set to grow in the coming years. As technology advances and more data becomes available, these tools will become increasingly integral to healthcare delivery. However, it is crucial that hospitals navigate the challenges associated with their use and ensure that they are used ethically and responsibly.

In conclusion, the impact of Big Data and AI on Asia Pacific Hospital Information Systems is profound, offering opportunities to revolutionize healthcare delivery. By harnessing these technologies, hospitals can improve patient care, streamline operations, and make more informed decisions. However, to fully realize these benefits, hospitals must address the challenges associated with their use and ensure that they are used in a way that respects patient privacy and promotes trust.

nGrow.ai: Revolutionizing Business Operations with Artificial … – Fagen wasanni

nGrow.ai is an artificial intelligence (AI) platform that is transforming the way companies optimize their business operations. With its wide range of features and functions, nGrow.ai automates and streamlines tasks and processes, increasing overall efficiency and productivity.

The platform offers various use cases and features that can be customized to meet the specific needs of each company. For example, e-commerce companies can automate inventory management to reduce errors and ensure sufficient stock levels. Customer service can be improved by automating responses to common inquiries, reducing wait times and enhancing customer satisfaction.

nGrow.ai also provides the capability to create custom dashboards that offer real-time insights into business operations. These dashboards display key metrics such as employee performance, project status, and sales, enabling managers to make informed decisions based on up-to-date data. AI algorithms generate actionable insights to help identify growth opportunities and strategies to improve operational efficiency.

The main advantage of using nGrow.ai is the ability to automate and optimize operations. Advanced AI algorithms handle repetitive and tedious tasks quickly and accurately, freeing up employees to focus on higher-value activities. This automation saves time and reduces human error, ultimately improving efficiency and saving costs.

In addition to automation, nGrow.ai offers tools to optimize existing operations by analyzing workflows and identifying areas for improvement. Detailed analytics provide a comprehensive view of how operations are performing, empowering companies to make informed adjustments and maximize efficiency.

One standout feature of nGrow.ai is the creation of custom dashboards, which provide real-time insights tailored to each business's specific needs. Managers can track daily sales, revenue, and individual team member performance, enabling them to make quick decisions to improve sales and optimize performance.

The platforms AI-powered insights are also instrumental in identifying growth opportunities. By analyzing data and spotting hidden patterns and trends, nGrow.ai helps companies tap into new markets, customer segments, or products that may have been overlooked.

nGrow.ai not only saves time and money but also provides comprehensive analytics and reports to identify areas for improvement. By taking corrective action based on this information, companies can streamline workflows and reduce downtime and errors.

While nGrow.ai offers numerous advantages, there are a few drawbacks to consider. The platform may require a learning curve to fully utilize all its features, and it might be expensive for small businesses with limited budgets.

In conclusion, nGrow.ai is an AI platform that revolutionizes business operations. With features such as custom dashboard creation, AI-powered insights, and in-depth analytics, it offers a comprehensive solution to improve efficiency and performance. By leveraging nGrow.ai, companies can save time and money, identify growth opportunities, and maximize operational efficiencies.

Driving Forces Behind the Expansion of Artificial Intelligence (AI) in … – Fagen wasanni

The Artificial Intelligence (AI) in Fintech Market is experiencing significant growth due to various driving forces. Technological breakthroughs have revolutionized the sector, making it possible to create new goods and services. Alongside this, changing consumer preferences and increased consumer awareness of AI in Fintech have driven demand. Supportive policies and favorable government laws have also encouraged industry growth and investment.

Furthermore, the sector has benefited from access to new markets and clientele through smart alliances and partnerships. These driving forces are working together to propel the Artificial Intelligence (AI) in Fintech Market to new heights, with a positive outlook for continued expansion in the coming years.

The global AI in Fintech Market is expected to experience steady growth in the coming years. This growth will be driven by continuous technological advancements, growing environmental awareness, and the rising need for streamlined operations. To seize the market opportunities, industry players are anticipated to focus on product innovation, strategic collaborations, and geographical expansion.

The market report includes profiles of leading companies operating in the AI in Fintech Market, such as Autodesk, IBM, Microsoft, Oracle, SAP, and Fanuc, among others. The report reveals key market methods that can assist businesses in leveraging their position in the market and diversifying their product range.

The report provides valuable insights into market growth based on in-depth primary and secondary data collection. It also categorizes the AI in Fintech Market based on type, including hardware, software, and services, and application, such as customer service, credit scores, insurance support, and financial market prediction.

The segmentation of the market allows for a more targeted analysis of specific market segments, helping businesses make informed decisions and tailor their strategies accordingly. With comprehensive market insights, in-depth industry analysis, accurate market sizing and forecasting data, and a focus on emerging trends and innovations, this report provides businesses with valuable foresight and a competitive edge in the AI in Fintech Market.

Those Three Clever Dogs Trained To Drive A Car Provide Valuable Lessons For AI Self-Driving Cars – Forbes

Perhaps this dog would prefer driving the car, just like three dogs that were trained to do so.

We've all seen dogs traveling in cars, including how they like to peek out an open window and enjoy the fur-fluffing breeze and dwell in the cacophony of scents that blow along in the flavorful wind.

The life of a dog!

Dogs have also frequently been used as living props in commercials for cars, pretending in some cases to drive a car, such as the Subaru Barkleys advertising campaign that initially launched on TV in 2018 and continued in 2019, proclaiming that Subaru cars were officially dog tested and dog approved.

Cute, clever, and memorable.

What you might not know or might not remember is that three dogs were trained to drive a car and had their moment of unveiling in December of 2012, when they were showcased driving a car on an outdoor track (the video posted to YouTube has amassed millions of views).

Yes, three dogs named Monty, Ginny, and Porter were destined to become the first true car drivers on behalf of the entire canine family.

Monty at the time was an 18-month-old giant schnauzer cross, while the slightly younger Ginny at one year of age was a beardie whippet cross, and Porter was a youthful 10-month-old beardie.

All three were the brave astronauts of their era and were chosen to not land on the moon but be the first to actively drive a car, doing so with their very own paws.

I suppose we ought to call them dog-o-nauts.

You might be wondering whether it was all faked.

I can guess that some might certainly think so, especially those that already believe that the 1969 moon landing was faked, and thus dogs driving a car would presumably be a piece of cake to fake in comparison.

The dog driving feat was not faked.

Well, let's put it this way: the effort was truthful in the sense that the dogs were indeed able to drive a car, albeit with some notable constraints involved.

Let's consider some of the caveats:

Specially Equipped Driving Controls

First, the car was equipped with specialized driving controls to allow the dogs to work the driving actions needed to steer the car, use the gas, shift gears, and apply the brakes of the vehicle.

The front paws of the dog driver were able to reach the steering wheel and gear-stick, while the back paws used extension levers to reach the accelerator and brake pedals. When a dog sat in the driver's seat, they did so on their haunches.

Of course, I don't think any of us would quibble about the use of specialized driving controls. I hope that establishing physical mechanisms to operate the driving controls would seem quite reasonable and not out of sorts per se.

We should willingly concede that having such accouterments is perfectly okay, since it's not access to the controls that ascertains driving acumen but rather the ability to use the driving controls appropriately that is the core consideration.

By the way, the fact too that they operated the gear shift is something of a mind-blowing nature, particularly when you consider that most of today's teenage drivers have never worked a stick shift and always used only an automatic transmission.

Dogs surpass teenage drivers in the gear-stick realm, it seems.

Specialized Training On How To Drive

Secondly, as another caveat, the dogs were given about 8 weeks of training on how to drive a car.

I don't believe you can carp about the training time; teenagers oftentimes receive weeks or even months of driver training before being able to drive a car on their own.

When you think about it, an 8-week or roughly two-month time frame to train a dog on nearly any complex task is remarkably short and illustrates how smart these dogs were.

One does wonder how many treats were given out during that training period, but I digress.

Focused On Distinct Driving Behaviors

Thirdly, the dogs learned ten distinct behaviors for purposes of driving.

For example, one behavior consisted of shifting the car into gear. Another behavior involved applying the brakes. And so on.

You might ponder this aspect for a moment.

How many distinct tasks are involved in the physical act of driving a car?

After some reflection, you'll realize that in some ways the driving of a car is extremely simplistic.

You need to steer, turning the wheel either to the left, right, or keep it straight ahead. In addition, you need to be able to use the accelerator, either pressing lightly or strongly, and you need to use the brake, either pressing lightly or strongly. Plus, well toss into the mix the need to shift gears.

In short, driving a car does not involve an exhaustive or especially complicated myriad of actions.

It makes sense that we've inexorably devolved car driving into a small set of simple chores.

Early versions of cars had many convoluted tasks that had to be manually undertaken. Over time, the automakers aimed to make car driving so simple that anyone could do it.

This aided the widespread adoption of cars by the populace as a whole and led to the blossoming of the automotive industry by being able to sell a car to essentially anyone.

Driving On Command

Fourth, and the most crucial of the caveats, the dogs were commanded by a trainer during the driving act.

I hate to say it, but this caveat is the one that regrettably undermines the wonderment and imagery of the dogs driving a car.

Sorry.

A trainer stood outside the car and yelled commands to the dogs, telling them to shift gears or to steer to the right, etc.

Okay, let's all agree that the dogs were actively driving the car, working the controls of the car, and serving as the captain of the ship in that they alone were responsible for the car as it proceeded along the outdoor track. They were even wearing seat-belts, for gosh sake.

That's quite amazing!

On the other hand, they were only responding to the commands being uttered toward them.

Thus, the dogs weren't driving the car in the sense that they were presumably not gauging the roadway scenery nor mentally calculating what driving actions to undertake.

It would be somewhat akin to putting a blindfolded human driver into the driver's seat and asking them to drive, with you sitting next to the driver and telling them what actions to take.

Yes, technically, the person would be the driver of the car, though I believe we'd all agree they weren't driving in the purest sense of the meaning of driving.

By and large, driving a car in its fullest definition consists of being able to assess the scene around the vehicle and render life-or-death judgments about what driving actions to take. Those mental judgments are then translated into our physical manipulation of the driving controls, such as opting to hit the gas or slam on the brakes.

One must presume that the dogs were not capable of doing the full driving act and were instead like the blindfolded human driver that merely is reacting to commands given to them.

Does this mean that those dogs weren't driving the car?

I suppose it depends upon how strict you want to be about the definition of driving.

If you are a stickler, you would likely cry foul and assert that the dogs were not driving a car.

If you are someone with a bit more leniency, you probably would concede that the dogs were driving a car, and then under your breath and with a wee bit of a smile mutter that they were determinedly and doggedly driving that car.

Perhaps we shouldn't be overly dogmatic about it.

You might also be wondering whether a dog could really, in fact, drive a car, doing so in the fuller sense of driving, if the dog perchance was given sufficient training to do so.

In other words, would a dog have the mental capacity to grasp the roadway status and be able to convert that into suitable driving actions, upon which then the dog would work the driving controls?

At this juncture in the evolution of dogs, one would generally have to say no, namely that a dog would not be able to drive a car in a generalized way.

That being said, it would potentially be feasible to train a dog to drive a car in a constrained environment whereby the roadway scenery was restricted, and the dog did not need to broadly undertake a wholly unconstrained driving task.

Before I dig more deeply into this topic herein, please do not try placing your beloved dog into the driver's seat of your car and forcing them to drive.

Essentially, I'm imploring you: don't try this at home.

I mention this warning because I don't want people to suddenly get excited about tossing their dog into the driver's seat to see what happens.

Bad idea.

Don't do it.

As mentioned, the three driving dogs were specially trained, and drove only on a closed-off outdoor track, doing so under the strict supervision of their human trainers and with all kinds of safety precautions being undertaken.

The whole matter was accomplished by the Royal New Zealand Society for the Prevention of Cruelty to Animals (SPCA), done as a publicity stunt that aimed to increase the adoption of neglected or forgotten dogs.

It was a heartwarming effort with a decent basis, and please don't extrapolate the matter into any unbecoming and likely dangerous replicative efforts.

Speaking of shifting gears, one might wonder whether the dogs that drove a car might provide other insights to us humans.

Here's today's question: What lessons, if any, can be learned from dogs driving cars that could be useful for the advent of AI-based true self-driving cars?

Let's unpack the matter and see.

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to true self-driving cars.

True self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don't yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different than driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Spiritual-Moral Values

For Level 4 and Level 5 true self-driving vehicles, there won't be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

If that's the case, it seems like there's no opportunity for dogs to drive cars.

Yes, that's true, namely that if humans aren't driving cars then there seems little need or basis to ask dogs to drive cars.

But that's not what we can learn from the effort to teach dogs to drive a car.

Let's tackle some interesting facets that arose when dogs were tasked with driving a car:

Humans Giving Commands

First, recall that the dogs, while sitting at the steering wheel, were responding to commands that were given to them.

In a manner of speaking (pun intended), you could suggest that we humans will be giving commands to the AI driving systems that are at the wheel of true self-driving cars.

Using Natural Language Processing (NLP), akin to how you converse with Alexa or Siri, as a passenger in a self-driving car you will instruct the AI about various aspects of the driving.

In theory, though, you won't be telling the AI to hit the gas or pound on the brakes. Presumably, the AI driving system will be adept enough to handle all of the everyday driving aspects involved, and it's not your place to offer commands about doing the driving chore.

Instead, you'll tell the AI where you want to go.

You might divert the journey by suddenly telling the AI that you are hungry and want to swing through a local McDonald's or Taco Bell.

You might explain to the AI that it can drive leisurely and take you through the scenic part of town since you aren't in a hurry and are a tourist in the town or city.

In some ways, you can impact the driving task, perhaps telling the AI that you are carsick and want it to slow down or not take curves so fast.
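As a sketch of the kind of interaction this implies, a passenger's spoken request might be reduced to a structured ride preference before it ever touches the driving controls. The function name, intent categories, and keyword rules below are my own hypothetical illustration, not part of any real self-driving stack.

```python
# Hypothetical sketch of mapping passenger utterances to high-level ride preferences.
# The point is that passengers would adjust the journey (destination, pace, comfort),
# not operate the driving controls themselves.
import re

def parse_passenger_request(utterance: str) -> dict:
    """Turn a spoken request into a structured ride preference using toy keyword rules."""
    text = utterance.lower()
    request = {}
    if match := re.search(r"(?:go to|take me to|swing through) (?:a |the )?(.+)", text):
        request["destination"] = match.group(1).strip()
    if any(word in text for word in ("hungry", "drive-thru")):
        request["stop_type"] = "food"
    if any(word in text for word in ("scenic", "leisurely", "no hurry")):
        request["pace"] = "relaxed"
    if any(word in text for word in ("carsick", "slow down", "take it easy")):
        request["comfort"] = "gentle"  # e.g. lower cornering speeds, softer braking
    return request

print(parse_passenger_request("I'm hungry, swing through a local McDonald's"))
print(parse_passenger_request("I'm carsick, please slow down and don't take curves so fast"))
```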

There are numerous open questions, as yet unresolved, about the interaction between the human passengers and the AI driving systems (see my detailed discussion at this link here).

For example, if you tell the AI to follow that car, similar to what happens in movies or when you are trying to chase after someone, should the AI obediently do so, or should it question why you want to follow the other car?

We presumably don't want AI self-driving cars that are stalking others.

Drive.ai raises $50M for retrofit kits to bring self-driving to existing fleets – TechCrunch

Self-driving technology startup Drive.ai has raised a $50 million Series B funding round, led by NEA and with participation from GGV and previous investors, including Series A lead Northern Light. The new funding will help the company pursue its evolved business strategy, which now focuses on creating retrofit kits that can be used to add self-driving capabilities to existing commercial and business vehicle fleets.

The Drive.ai approach to self-driving tech is based on development and use of deep learning for all aspects of the platform, which the company says will help it achieve better development pace, scalability and efficiency gains. Many others in the field use a hybrid approach, applying deep learning in certain areas but not others, but Drive.ai believes the true gains are best achieved by using it throughout the autonomous system stack.

The last time I spoke to Drive.ai, which was founded by a team from Stanford's AI lab, they weren't yet talking about retrofit kits, and were instead focused on developing a self-driving car that also had a strong focus on intelligent and intuitive communication with the surrounding world. In an interview about this funding, Drive.ai co-founder and CEO Sameep Tandon explains the shift, while noting that communication is still a core aspect of their focus.

"What we build at Drive.ai, you can think of it as an AI brain, and all those parts that are required to remove the human driver from the vehicle," he said. "So we focus on Level 4 autonomous driving. A huge part of that is once you remove the human driver from the vehicle, how these vehicles will interact with people in the real world, and build their trust and depict their intentions. That's something that we [believe] is absolutely critical to the safe deployment of autonomous technology."

Drive.ai's retrofit kits employ off-the-shelf hardware, including radar and LiDAR, and the startup focuses on building the autonomous software platform that brings all those aspects together to make the self-driving magic happen. With this funding, the company will focus on launching its first pilots, which it's aiming to start later this year, and on international expansion.

Retrofit options are definitely going to be attractive to any fleet operators who have a large pool of existing vehicles and aren't eager to throw out that investment and buy all new cars when autonomy becomes the norm. But retrofits are typically costly and difficult, so I asked Tandon just how plug-and-play Drive.ai's kits will actually be.

"The retrofit kits are intended to be for business fleets, so it's not intended to be something a consumer can install; it'll take a little bit of integration," he said. "But it's intended to make it relatively quick to retrofit a large fleet."

Alongside this funding, Drive.ai is also adding two new Directors to its Board, including NEA chairman and head of Asia Carmen Chang, and Coursera co-founder and Google and Baidu AI alum Andrew Ng. Both should help Drive.ai with market expansion plans, particularly thanks to their experience with China, and Ng's AI bona fides are very highly regarded across the industry.

Augusta Health has saved 282 lives with AI-infused sepsis early warning system – Healthcare IT News

In Virginia, the statewide mortality rate for sepsis was 13.2% in 2016. Sepsis is the body's life-threatening response to infection that can lead to tissue damage and organ failure. In the U.S., 1.5 million people develop sepsis each year, and about 17% of those die. Early detection of sepsis is critical to decrease mortality.

THE PROBLEM

Clinical and IT staffs at Augusta Health, an independent, community-owned, not-for-profit hospital in Virginia, knew that studies have shown that though treatments are available in a general hospital setting, they are rarely completed in a timely manner.

"Our nurses are highly trained and are skilled at detecting early symptoms of sepsis based on standard indicators, but they are also very busy," said Penny Cooper, a data scientist at Augusta Health. "Aware of how many patients our nurses care for and the many tasks nurses juggle at once, leadership formed a sepsis taskforce with the goal of providing staff with the resources to identify symptoms of sepsis sooner."

PROPOSAL

For sepsis, it's all about early detection. Mortality from sepsis increases by as much as 8% for every hour that treatment is delayed.

"By identifying sepsis early in the process, we have a much better chance of treating the infection before it goes too far," Cooper said. "In addition, early identification of sepsis remains the greatest barrier to compliance with recommended evidence-based bundles."

So Augusta Health decided to develop a system with Vocera Communications that provides an extra set of eyes that automatically reviews data. By reviewing the data electronically, staff is able to recognize symptoms earlier and provide automated alerts to the bedside nurses.

MARKETPLACE

On the clinical communications technology front, vendors include Avaya, Halo, HipLink Software, Mobile Heartbeat, PatientSafe Solutions, PerfectServe, Spok, Telmediq and Vocera.

MEETING THE CHALLENGE

Clinicians are familiar with the standard SIRS (Systemic Inflammatory Response Syndrome) criteria: temperature >38°C; heart rate >90 beats per minute; respiratory rate >20 breaths per minute; abnormal white blood cell count.

"But we wanted to see if we could increase the sensitivity by adding additional variables, so we began with the inpatient population and a retrospective study," Cooper explained.

"We started with standard SIRS criteria but also added Mean Arterial Pressure and Shock Index," she added. "By adding the additional variables MAP and Shock Index as categorical variables to the logistic regression analysis, we were able to increase the overall c-statistic by 0.07. The c-stat is a measurement of how well your test performs."

Staff run an automated process that collects information from the Rauland Bed System and clinical data from the EHR for all current inpatients.

"The data then is compiled and analyzed, and a score is assigned based on the results of the retrospective study," Cooper noted. "This occurs every hour for every inpatient. Then for patients with a score greater than the cutoff, an alert is sent to the attending and charge nurse on their Vocera devices."
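As a rough sketch of the hourly screening loop Cooper describes, the snippet below scores each inpatient with a logistic-regression-style model and flags those above a cutoff. The weights, cutoff, and field names are placeholders invented for illustration; Augusta Health's actual model was fit to its own retrospective inpatient data.

```python
# Illustrative sketch of an hourly sepsis screening pass. The coefficients, cutoff, and
# data fields are placeholders, not Augusta Health's actual model.
import math

# Placeholder logistic-regression weights for SIRS-style indicators plus MAP and shock index.
WEIGHTS = {
    "temp_abnormal": 0.9,      # temperature >38 C or <36 C
    "heart_rate_high": 0.7,    # heart rate >90 beats per minute
    "resp_rate_high": 0.8,     # respiratory rate >20 breaths per minute
    "wbc_abnormal": 0.6,       # abnormal white blood cell count
    "map_low": 1.1,            # mean arterial pressure below threshold
    "shock_index_high": 1.2,   # shock index (heart rate / systolic BP) above threshold
}
INTERCEPT = -4.0
ALERT_CUTOFF = 0.5             # placeholder probability threshold

def sepsis_risk(indicators: dict) -> float:
    """Logistic-regression-style probability that a patient is developing sepsis."""
    z = INTERCEPT + sum(WEIGHTS[name] * int(indicators.get(name, False)) for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def hourly_screen(inpatients: list) -> list:
    """Score every current inpatient and return the IDs whose nurses should be alerted."""
    return [p["patient_id"] for p in inpatients if sepsis_risk(p["indicators"]) > ALERT_CUTOFF]

patients = [
    {"patient_id": "A102", "indicators": {"temp_abnormal": True, "heart_rate_high": True,
                                          "resp_rate_high": True, "map_low": True,
                                          "shock_index_high": True}},
    {"patient_id": "B215", "indicators": {"heart_rate_high": True}},
]
print(hourly_screen(patients))  # in production, each alert would go to a Vocera device
```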

The sepsis communication system was developed in-house and is an example of interoperability between different healthcare information systems. The process runs in the background so no one actually interacts with the process unless their patient is alerted and requires screening.

"By using artificial intelligence capabilities, we are able to screen 100% of our inpatient population and deliver results directly to the caregiver wherever they are in the hospital without any manual intervention," Cooper explained. Core measure requirements, along with patient impact and historical volume trends, also make the introduction of this tool relevant and timely.

RESULTS

The sepsis mortality rate at Augusta Health now is 4.8%, compared with 13.2% at the state level.

"The work done by our teams at Augusta Health to reduce mortality rates from sepsis has been a collaborative effort," Cooper said.

"We automated a process to screen all of our patients without any manual intervention, but most importantly we have saved lives. By subtracting the actual mortality from the expected mortality rate, we estimate a total of 282 lives have been saved."

Health Quality Innovators named the hospital the Health Quality Innovator for Virginia in the category of Data-driven Care in 2018. U.S. News & World Report recognized Augusta Health as a Best Hospital in the Shenandoah Valley for 2019-20.

ADVICE FOR OTHERS

"System interoperability is key to ensure that the right data gets to the right clinicians at the right time and without any manual effort," Cooper advised. "While many EHRs may alert the clinician of sepsis within the medical record, this process delivers the alert to the communications device of the bedside nurse."

"With the input from clinicians and quality staff, the process was not overly difficult to accomplish," she added. Staff developed the system using off-the-shelf tools, including SQL Server and SQL Server Integration Services.

"The use of the model within the study facility has resulted in a culture change," she said. "A review at a daily safety huddle prompts proactive rounding by the nursing directors. This process provides additional support that nursing staff may need to get patients to the appropriate level of care. For the immediate future, we plan to continue the use of the model within our facility, re-evaluate it from both an operational and clinical standpoint, and modify as appropriate."

Menten AIs combination of buzzword bingo brings AI and quantum computing to drug discovery – TechCrunch

Menten AI has an impressive founding team and a pitch that combines some of the hottest trends in tech to pursue one of the biggest problems in healthcare: new drug discovery. The company is also $4 million richer with a seed investment from firms including Uncork Capital and Khosla Ventures to build out its business.

Menten AI's pitch to investors was the combination of quantum computing and machine learning to discover new drugs that sit between small molecules and large biologics, according to the company's co-founder Hans Melo.

A graduate of the Y Combinator accelerator, which also participated in the round alongside Social Impact Capital*, Menten AI looks to design proteins from scratch. It's a heavier lift than some might expect because, as Melo said in an interview, it takes a lot of work to make an actual drug.

Menten AI is working with peptides, which are strings of amino acid chains similar to proteins that have the potential to slow aging, reduce inflammation and get rid of pathogens in the body.

"As a drug modality [peptides] are quite new," says Melo. "Until recently it was really hard to design them computationally and people tried to focus on genetically modifying them."

Peptides have the benefit of getting through membranes and into cells where they can combine with targets that are too large for small molecules, according to Melo.

Most drug targets are not addressable with either small molecules or biologics, according to Melo, which means there's a huge untapped potential market for peptide therapies.

Menten AI is already working on a COVID-19 therapeutic, although the company's young chief executive declined to disclose too many details about it. Another area of interest is in neurological disorders, where the founding team members have some expertise.

Image of peptide molecules. Image Courtesy: D-Wave

While Menten AI's targets are interesting, the approach that the company is taking, using quantum computing to potentially drive down the cost and accelerate the time to market, is equally compelling for investors.

It's also unproven. Right now, there isn't a quantum advantage to using the novel computing technology versus traditional computing. Something that Melo freely admits.

"We're not claiming a quantum advantage, but we're not claiming a quantum disadvantage," is the way the young entrepreneur puts it. "We have come up with a different way of solving the problem that may scale better. We haven't proven an advantage."

Still, the company is an early indicator of the kinds of services quantum computing could offer, and it's with that in mind that Menten AI partnered with some of the leading independent quantum computing companies, D-Wave and Rigetti Computing, to work on applications of their technology.

The emphasis on quantum computing also differentiates it from larger publicly traded competitors like Schrödinger and Codexis.

So does the pedigree of its founding team, according to Uncork Capital investor Jeff Clavier. "It's really the unique team that they formed," Clavier said of his decision to invest in the early-stage company. "There's Hans the CEO who is more on the quantum side; there's Tamas [Gorbe] on the bio side and there's Vikram [Mulligan] who developed the research. It's kind of a unique fantastic team that came together to work on the opportunity."

Clavier has also acknowledged the possibility that it might not work.

"Can they really produce anything interesting at the end?" he asked. "It's still an early-stage company and we may fall flat on our face, or they may come up with really new ways to make new peptides."

It's probably not a bad idea to take a bet on Melo, who worked with Mulligan, a researcher from the Flatiron Institute focused on computational biology, to produce some of the early research into the creation of new peptides using D-Wave's quantum computing.

Novel peptide structures created using D-Wave's quantum computers. Image Courtesy: D-Wave

While Melo and Mulligan were the initial researchers working on the technology that would become Menten AI, Gorbe was added to the founding team to get the company some exposure into the world of chemistry and enzymatic applications for its new virtual protein manufacturing technology.

The gamble paid off in the form of pilot projects (also undisclosed) that focus on the development of enzymes for agricultural applications and pharmaceuticals.

"At the end of the day, what they're doing is they're using advanced computing to figure out what is the optimal placement of those clinical compounds in a way that is less based on those sensitive tests and more bound on those theories," said Clavier.

*This post was updated to add that Social Impact Capital invested in the round. Khosla, Social Impact, and Uncork each invested $1 million into Menten AI.

AI likes to do bad things. Here’s how scientists are stopping it from scamming you – SYFY WIRE

The robots aren't taking over yet, but sometimes, they can get a little out of control.

AI apparently has a bias toward making unethical choices. This tends to spike in commercial situations, and nobody wants to get scammed by a bot. Some types of artificial intelligence even choose disproportionately when it comes to things like setting insurance prices for particular customers (yikes). Though there are many potential strategies a program can choose from, it needs to be prevented from going straight to the unethical ones. An international team of scientists have now come up with a formula that explains why this happens and are now working to combat crime by computer brain.

"In an environment in which decisions are increasingly made without human intervention, there is therefore a strong incentive to know under what circumstances AI systems might adopt unethical strategies," the scientists said in a study recently published in Royal Society Open Science.

Even if there aren't that many possible unethical strategies an AI program can pick up, that doesn't lessen the possibility of it picking something shady. Figuring out prices for car insurance can be tricky, since things like past accidents and points on your license have to be factored in. In a world where we sometimes communicate with robots more than with humans, bots can be convenient. The problem is, in situations where money is involved, they can do things like apply price-raising penalties you don't deserve to your insurance policy (of course anyone would be thrilled if the unlikely opposite happened).

The chance of AI screwing up could mean huge consequences for a company: everything from fines to lawsuits. With thinking robots come robot ethics. You're probably wondering why unethical choices can't just be eliminated completely. That would happen in an ideal sci-fi world, but the scientists believe that the best that can be done is limiting the percentage of unethical choices to as few as possible. There is still the problem of the "unethical optimization principle."

"If an AI aims to maximize risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk," is how the team describes the principle. It isn't that robots are starting to turn evil.
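A toy Monte Carlo run can illustrate the principle as stated. Under assumptions of my own choosing (a pool of candidate pricing strategies in which only 2% are unethical but those carry a modestly higher expected return), an optimizer that looks only at return ends up choosing an unethical strategy well above that 2% share.

```python
# Toy simulation of the "unethical optimization principle" quoted above. The numbers
# (pool size, 2% unethical share, return distributions) are illustrative assumptions.
import random

random.seed(0)
TRIALS = 2_000
N_STRATEGIES = 500
UNETHICAL_FRACTION = 0.02

picked_unethical = 0
for _ in range(TRIALS):
    best_return, best_is_unethical = float("-inf"), False
    for _ in range(N_STRATEGIES):
        unethical = random.random() < UNETHICAL_FRACTION
        # Unethical strategies get a small edge in expected return (e.g. hidden penalties).
        expected_return = random.gauss(1.10 if unethical else 1.00, 0.10)
        if expected_return > best_return:
            best_return, best_is_unethical = expected_return, unethical
    picked_unethical += best_is_unethical

# Although only 2% of strategies are unethical, the pure return maximizer picks one far
# more often than that, unless the objective function is adjusted to penalize them.
print(f"Optimizer chose an unethical strategy in {picked_unethical / TRIALS:.0%} of trials")
```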

The AI actually doesn't make unethical choices consciously. We're not at Westworld levels yet, but making a bot less likely to choose wrong will make sure we don't go there.

Google is using AI to create stunning landscape photos using Street View imagery – The Verge

Google's latest artificial intelligence experiment is taking in Street View imagery from Google Maps and transforming it into professional-grade photography through post-processing, all without a human touch. Hui Fang, a software engineer on Google's Machine Perception team, says the project uses machine learning techniques to train a deep neural network to scan thousands of Street View images in California for shots with impressive landscape potential. The software then mimics the workflow of a professional photographer to turn that imagery into an aesthetically pleasing panorama.

Google is training AI systems to perform subjective tasks like photo editing

The research, posted to the pre-print server arXiv earlier this week, is a great example of how AI systems can be trained to perform tasks that aren't binary, with a right or wrong answer, but are more subjective, like in the fields of art and photography. Doing this kind of aesthetic training with software can be labor-intensive and time-consuming, as it has traditionally required labeled data sets. That means human beings have to manually pick out which lighting effects or saturation filters, for example, result in a more aesthetically pleasing photograph.

Fang and his team used a different method. They were able to train the neural network quickly and efficiently to identify what most would consider superior photographic elements using what's known as a generative adversarial network. This is a relatively new and promising technique in AI research that pits two neural networks against one another and uses the results to improve the overall system.

In other words, Google had one AI photo editor attempt to fix professional shots that had been randomly tampered with by an automated system that changed lighting and applied filters. Another model then tried to distinguish between the edited shot and the original professional image. The end result is software that understands generalized qualities of good and bad photographs, which allows it to then be trained to edit raw images to improve them.
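The sketch below shows that adversarial pattern in miniature: an "editor" network tries to restore tampered images so well that a "critic" network cannot tell them from the untouched originals. Random tensors stand in for real photographs, and the tiny architecture is a stand-in of my own, not Google's actual model.

```python
# Minimal generative-adversarial sketch of the setup described above: an "editor" learns
# to fix randomly tampered images, while a "critic" learns to tell edited images from
# professional originals. Random tensors stand in for real photos.
import torch
from torch import nn

IMG_PIXELS = 64 * 64 * 3  # flattened toy "image"

editor = nn.Sequential(nn.Linear(IMG_PIXELS, 256), nn.ReLU(), nn.Linear(256, IMG_PIXELS))
critic = nn.Sequential(nn.Linear(IMG_PIXELS, 256), nn.ReLU(), nn.Linear(256, 1))

opt_editor = torch.optim.Adam(editor.parameters(), lr=1e-4)
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    professional = torch.rand(32, IMG_PIXELS)                        # stand-in pro shots
    degraded = professional + 0.3 * torch.randn_like(professional)   # randomly "tampered"

    # 1) Train the critic to separate professional shots from the editor's output.
    with torch.no_grad():
        edited = editor(degraded)
    critic_loss = (bce(critic(professional), torch.ones(32, 1))
                   + bce(critic(edited), torch.zeros(32, 1)))
    opt_critic.zero_grad()
    critic_loss.backward()
    opt_critic.step()

    # 2) Train the editor to produce edits that the critic scores as "professional".
    editor_loss = bce(critic(editor(degraded)), torch.ones(32, 1))
    opt_editor.zero_grad()
    editor_loss.backward()
    opt_editor.step()

print(f"final critic loss {critic_loss.item():.3f}, editor loss {editor_loss.item():.3f}")
```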

To test whether its AI software was actually producing professional-grade images, Fang and his team used a Turing-test-like experiment. They asked professional photographers to grade the photos its network produced on a quality scale, while mixing in shots taken by humans. Around two out of every five photos received a score on par with that of a semi-pro or pro, Fang says.

"The Street View panoramas served as a testing bed for our project," Fang says. Someday this technique might even help you to take better photos in the real world. The team compiled a gallery of photos its network created out of Street View images, and clicking on any one will pull up the section of Google Maps that it captures. Fang concludes with a neat thought experiment about capturing photos in the real world: "Would you make the same decision if you were there holding the camera at that moment?"

Reduce background noise in Microsoft Teams meetings with AI-based noise suppression – Microsoft

Whether it be multiple meetings occurring in a small space, children playing loudly nearby, or construction noise outside of your home office, unwanted background noise can be really distracting in Teams meetings. We are excited to announce that users will have the ability to remove unwelcome background noise during their calls and meetings with our new AI-based noise suppression option.

Users can enable this helpful new feature by adjusting their device settings before their call or meeting and selecting "High" in the "Noise suppression" drop-down (note this feature is currently only supported in the Teams Windows desktop client). See this support article for details about how to turn it on and more here: https://aka.ms/noisesuppression.

Our new noise suppression feature works by analyzing an individual's audio feed and uses specially trained deep neural networks to filter out noise and only retain speech. While traditional noise suppression algorithms can only address simple stationary noise sources such as a consistent fan noise, our AI-based approach learns the difference between speech and unnecessary noise and is able to suppress various non-stationary noises, such as keyboard typing or food wrapper crunching. With the increased work from home due to the COVID-19 pandemic, noises such as vacuuming, your child's conflicting school lesson or kitchen noises have become more common but are effectively removed by our new AI-based noise suppression, exemplified in the video below.

The AI-based noise suppression relies on machine learning (ML) to learn the difference between clean speech and noise. The key is to train the ML model on a representative dataset to ensure it works in all situations our Teams customers are experiencing. There needs to be enough diversity in the data set in terms of the clean speech, the noise types, and the environments from which our customers are joining online meetings.

To achieve this dataset diversity, we have created a large dataset with approximately 760 hours of clean speech data and 180 hours of noise data. To comply with Microsoft's strict privacy standards, we ensured that no customer data was collected for this data set. Instead, we either used publicly available data or crowdsourcing to collect specific scenarios. For the clean speech, we ensured a balance of female and male speech, and we collected data from more than 10 languages, including tonal languages, to ensure that our model will not change the meaning of a sentence by distorting the tone of its words. For the noise data, we included 150 noise types to cover the diverse scenarios our customers may run into, from keyboard typing to toilet flushing or snoring. Another important aspect was to include emotions in our clean speech so that expressions like laughter or crying are not suppressed. The characteristics of the environment from which our customers join their online Teams meetings have a strong impact on the speech signal as well. To capture that diversity, we trained our model with data from more than 3,000 real room environments and more than 115,000 synthetically created rooms.
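A training set like this is typically assembled by mixing clean speech with noise at randomized signal-to-noise ratios, optionally filtered through a room impulse response. The sketch below illustrates that general recipe under assumed parameters; it is not Microsoft's actual data pipeline.

```python
# Rough sketch of noisy-mixture generation for supervised denoiser training (assumed).
import numpy as np

def mix_at_snr(clean, noise, snr_db, room_ir=None):
    """Returns (noisy_input, clean_target) for one training example."""
    if room_ir is not None:                       # simulate the meeting room's acoustics
        clean = np.convolve(clean, room_ir)[: len(clean)]
    noise = np.resize(noise, clean.shape)         # loop/trim noise to the speech length
    clean_power = np.mean(clean ** 2) + 1e-12
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale the noise so that clean_power / (scale**2 * noise_power) hits the target SNR.
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise, clean

# Toy usage with random stand-ins for real recordings:
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)               # 1 s of "clean speech"
typing = rng.standard_normal(4000)                # "keyboard" noise clip
noisy, target = mix_at_snr(speech, typing, snr_db=rng.uniform(0, 20))
```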

Since we use deep learning, it is important to have a powerful model-training infrastructure. We use Microsoft Azure to allow our team to develop improved versions of our ML model. Another challenge is that the extraction of the original clean speech from the noise needs to be done in a way that the human ear perceives as natural and pleasant. Since there are no objective metrics that correlate strongly with human perception, we developed a framework which allowed us to send the processed audio samples to crowdsourcing vendors, where human listeners rated their audio quality on a one- to five-star scale to produce mean opinion scores (MOS). With these human ratings we were able to develop a new perceptual metric which, together with the subjective human ratings, allowed us to make fast progress on improving the quality of our deep learning models.
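As a concrete illustration of the rating step, a mean opinion score is simply the average of the one- to five-star listener ratings for a processed clip, usually reported with a confidence interval. The numbers below are made up for the example.

```python
# Turning crowdsourced 1-5 star ratings into a mean opinion score (MOS) with an
# approximate 95% interval, to compare two denoiser variants (illustrative only).
import statistics

def mos(ratings):
    """Returns (mean opinion score, half-width of an approximate 95% interval)."""
    mean = statistics.fmean(ratings)
    half_width = 1.96 * statistics.stdev(ratings) / len(ratings) ** 0.5
    return mean, half_width

model_a = [4, 5, 4, 3, 4, 5, 4, 4, 3, 5]   # hypothetical listener scores
model_b = [3, 3, 4, 2, 3, 4, 3, 3, 2, 4]
print(mos(model_a), mos(model_b))           # the higher MOS wins, within its interval
```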

To advance the research in this field we have also open-sourced our dataset and the perceptual quality crowdsourcing framework. This has been the basis of two competitions we hosted as part of the Interspeech 2020 and ICASSP 2021 conferences as outlined here: https://www.microsoft.com/en-us/research/dns-challenge/home/

Finally, we ensured that our deep learning model could run efficiently on the Teams client in real time. By optimizing for human perception, we were able to achieve a good trade-off between quality and complexity, which ensures that most Windows devices our customers are using can take advantage of our AI-based noise suppression. Our team is currently working on bringing this feature to our Mac and mobile platforms as well.

AI-based noise suppression is an example of how our deep learning technology has a profound impact on our customers' quality of experience.

View original post here:

Reduce background noise in Microsoft Teams meetings with AI-based noise suppression - Microsoft

This know-it-all AI learns by reading the entire web nonstop – MIT Technology Review

This is a problem if we want AIs to be trustworthy. That's why Diffbot takes a different approach. It is building an AI that reads every page on the entire public web, in multiple languages, and extracts as many facts from those pages as it can.

Like GPT-3, Diffbots system learns by vacuuming up vast amounts of human-written text found online. But instead of using that data to train a language model, Diffbot turns what it reads into a series of three-part factoids that relate one thing to another: subject, verb, object.

Pointed at my bio, for example, Diffbot learns that Will Douglas Heaven is a journalist; Will Douglas Heaven works at MIT Technology Review; MIT Technology Review is a media company; and so on. Each of these factoids gets joined up with billions of others in a sprawling, interconnected network of facts. This is known as a knowledge graph.
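As a toy illustration of that structure (not Diffbot's implementation), the factoids above can be stored as edges in a simple graph keyed by entity:

```python
# Each extracted subject-verb-object factoid becomes an edge in a graph.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        self.edges = defaultdict(list)          # subject -> [(relation, object), ...]

    def add_fact(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def facts_about(self, subject):
        return self.edges[subject]

kg = KnowledgeGraph()
kg.add_fact("Will Douglas Heaven", "is a", "journalist")
kg.add_fact("Will Douglas Heaven", "works at", "MIT Technology Review")
kg.add_fact("MIT Technology Review", "is a", "media company")

# Facts chain together: follow the edges from one entity into the next.
print(kg.facts_about("Will Douglas Heaven"))
print(kg.facts_about("MIT Technology Review"))
```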

Knowledge graphs are not new. They have been around for decades, and were a fundamental concept in early AI research. But constructing and maintaining knowledge graphs has typically been done by hand, which is hard. This also stopped Tim Berners-Lee from realizing what he called the semantic web, which would have included information for machines as well as humans, so that bots could book our flights, do our shopping, or give smarter answers to questions than search engines.

A few years ago, Google started using knowledge graphs too. Search for Katy Perry and you will get a box next to the main search results telling you that Katy Perry is an American singer-songwriter with music available on YouTube, Spotify, and Deezer. You can see at a glance that she is married to Orlando Bloom, she's 35 and worth $125 million, and so on. Instead of giving you a list of links to pages about Katy Perry, Google gives you a set of facts about her drawn from its knowledge graph.

But Google only does this for its most popular search terms. Diffbot wants to do it for everything. By fully automating the construction process, Diffbot has been able to build what may be the largest knowledge graph ever.

Alongside Google and Microsoft, it is one of only three US companies that crawl the entire public web. It definitely makes sense to crawl the web, says Victoria Lin, a research scientist at Salesforce who works on natural-language processing and knowledge representation. A lot of human effort can otherwise go into making a large knowledge base. Heiko Paulheim at the University of Mannheim in Germany agrees: Automation is the only way to build large-scale knowledge graphs.

To collect its facts, Diffbot's AI reads the web as a human would, but much faster. Using a super-charged version of the Chrome browser, the AI views the raw pixels of a web page and uses image-recognition algorithms to categorize the page as one of 20 different types, including video, image, article, event, and discussion thread. It then identifies key elements on the page, such as the headline, author, product description, or price, and uses NLP to extract facts from any text.

Every three-part factoid gets added to the knowledge graph. Diffbot extracts facts from pages written in any language, which means that it can answer queries about Katy Perry, say, using facts taken from articles in Chinese or Arabic even if they do not contain the term Katy Perry.

Browsing the web like a human lets the AI see the same facts that we see. It also means it has had to learn to navigate the web like us. The AI must scroll down, switch between tabs, and click away pop-ups. The AI has to play the web like a video game just to experience the pages, says Diffbot founder and CEO Mike Tung.

Diffbot crawls the web nonstop and rebuilds its knowledge graph every four to five days. According to Tung, the AI adds 100 million to 150 million entities each month as new people pop up online, companies are created, and products are launched. It uses more machine-learning algorithms to fuse new facts with old, creating new connections or overwriting out-of-date ones. Diffbot has to add new hardware to its data center as the knowledge graph grows.
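The fuse-or-overwrite step can be pictured roughly as an upsert keyed on subject and relation, where a newer, at-least-as-confident extraction replaces the stored value. The rule below is an assumption for illustration, not Diffbot's actual algorithm.

```python
# Assumed sketch of fusing new facts with old for a functional relation such as "spouse".
from dataclasses import dataclass

@dataclass
class Fact:
    obj: str
    confidence: float
    crawl_id: int          # which rebuild of the graph produced this fact

store = {}                 # (subject, relation) -> current best Fact

def upsert(subject, relation, new):
    key = (subject, relation)
    old = store.get(key)
    if old is None or (new.crawl_id > old.crawl_id and new.confidence >= old.confidence):
        store[key] = new   # newer and at least as confident: overwrite the stale fact
    # else: keep the existing fact; the new extraction is older or less certain

upsert("Katy Perry", "spouse", Fact("Orlando Bloom", 0.90, crawl_id=41))
upsert("Katy Perry", "spouse", Fact("Orlando Bloom", 0.95, crawl_id=42))
print(store[("Katy Perry", "spouse")])
```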

Researchers can access Diffbot's knowledge graph for free. But Diffbot also has around 400 paying customers. The search engine DuckDuckGo uses it to generate its own Google-like boxes. Snapchat uses it to extract highlights from news pages. The popular wedding-planner app Zola uses it to help people make wedding lists, pulling in images and prices. NASDAQ, which provides information about the stock market, uses it for financial research.

Adidas and Nike even use it to search the web for counterfeit shoes. A search engine will return a long list of sites that mention Nike trainers. But Diffbot lets these companies look for sites that are actually selling their shoes, rather than just talking about them.

For now, these companies must interact with Diffbot using code. But Tung plans to add a natural-language interface. Ultimately, he wants to build what he calls a universal factoid question answering system: an AI that could answer almost anything you asked it, with sources to back up its response.

Tung and Lin agree that this kind of AI cannot be built with language models alone. Better still, they suggest, would be to combine the technologies, using a language model like GPT-3 to craft a human-like front end for a know-it-all bot.

Still, even an AI that has its facts straight is not necessarily smart. We're not trying to define what intelligence is, or anything like that, says Tung. We're just trying to build something useful.

See the article here:

This know-it-all AI learns by reading the entire web nonstop - MIT Technology Review

Facial recognition needs auditing and ethics standards to be safe, AI Now bias critic argues – Biometric Update

The artificial intelligence community needs to begin developing the vocabulary to define and clearly explain the harms the technology can cause, in order to rein in abuses involving facial biometrics, AI Now Institute Technology Fellow Deb Raji argues in a TWIML AI podcast.

The podcast, How External Auditing is Changing the Facial Recognition Landscape with Deb Raji, is hosted by Sam Charrington, who asks about the genesis of the audits Raji and her colleagues have performed of biometric facial recognition systems, the industry's response, and the ethical way forward.

Raji describes her journey, through academia and an internship with Clarifai, to taking up the cause of algorithmic bias and connecting with Joy Buolamwini after watching her TED Talk. The work Raji did with others in the community gained prominence with Gender Shades, and concepts that emerged from that and similar projects have been built into engineering practices at Google.

Facial recognition is characterized as a very immature technology, one that the Gender Shades study exposed as not working.

It really sort of stemmed from this desire to identify the problem in a consistent way and communicate it in a consistent way, Raji says of the early work delineating the problem of demographic differentials in facial recognition.

Raji won an AI Innovation Award, along with Buolamwini and Timnit Gebru, for their work in 2019.

The problem was hardly understood at all when Raji first began bringing it up, and even now seems to be fully comprehended by few in the community, as Raji says is demonstrated by a recent Twitter argument between Yann LeCun and Gebru. Raji comments that the connection between research efforts like LeCun's and products should be very clear to him. Raji also pans his downplaying of what she calls the procedural negligence of not including people of color in the testing.

Representation does not necessarily mean that the training dataset demographics mirror the society the model is being deployed in. Raji notes that if 10 percent of the people in a certain area have dark skin, then models used there need to be trained with enough images of people with dark skin to ensure that the model works for that 10 percent, which may be a much higher ratio.
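A small worked example makes the point concrete; the population share and example counts below are illustrative assumptions, not figures from Raji's work.

```python
# Matching the deployment population's demographics in training data is not the goal:
# each subgroup needs enough examples for the model to work for it, which often means
# over-representing the smaller group relative to its share of the population.
deployment_share = {"darker_skin": 0.10, "lighter_skin": 0.90}   # population in the area
min_examples_per_group = 50_000                                   # assumed training need

training_plan = {group: max(min_examples_per_group, int(share * 100_000))
                 for group, share in deployment_share.items()}

total = sum(training_plan.values())
training_ratio = {g: n / total for g, n in training_plan.items()}
print(training_plan)      # {'darker_skin': 50000, 'lighter_skin': 90000}
print(training_ratio)     # darker-skin share of training data is roughly 36%, not 10%
```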

Raji also talks during the podcast about how the results of the follow-up testing show the need for targeted pressure to force companies to address the gaps in their demographic performance. The limits of auditing are also explored in the conversation.

The need to have information specific to particular implementations is discussed in the context of facial recognition for law enforcement uses, and Raji suggests the technology should be taken off the market in the absence of that information.

Raji says that as some facial recognition systems have reduced or practically eliminated demographic disparities and other accuracy issues, the problem of the technology's weaponization has become more pressing. She notes that people are much more careful with their fingerprint data than with images of their faces. In addition to misuse by law enforcement, sometimes out of ignorance about the technology and sometimes deliberate, Raji points to the weaponization of the technology in deployments like the one at the Atlantic Plaza Towers in Brooklyn.

The bias problem exposes the complexity of the issue, and the myth that facial recognition is like magic, Raji suggests. While the necessary conversations are held, the technology should not be used, according to Raji. To make it safe, she suggests that technical standards like those supplied by NIST need to be supplemented with others that include considerations of ethics, like those produced or discussed by ISO, IEEE, and the WEF.

Though Raji presents the problems she is concerned with as systemic, she acknowledges that some applications of facial recognition are benign.

No one's threatening your Snapchat filter, Raji states.


View post:

Facial recognition needs auditing and ethics standards to be safe, AI Now bias critic argues - Biometric Update

Watson Won Jeopardy, but Is It Smart Enough to Spin Big Blue’s AI Into Green? – WIRED

View post:

Watson Won Jeopardy, but Is It Smart Enough to Spin Big Blue's AI Into Green? - WIRED

Why China’s Race For AI Dominance Depends On Math – The National Interest

THE WORLD first took notice of Beijing's prowess in artificial intelligence (AI) in late 2017, when BBC reporter John Sudworth, hiding in a remote southwestern city, was located by China's CCTV system in just seven minutes. At the time, it was a shocking demonstration of power. Today, companies like YITU Technology and Megvii, leaders in facial recognition technology, have compressed those seven minutes into mere seconds. What makes those companies so advanced, and what powers not only China's surveillance state but also its broader economic development, is not simply its AI capability, but rather the math power underlying it.

The race for AI supremacy has become perhaps the most visible aspect of the great power competition between America and China. The world's dominant AI power will have the ability to shape global finance, commerce, telecommunications, warfighting, and computing. President Donald Trump recognized this last February by signing an executive order, the American AI Initiative, designed to protect U.S. leadership in key AI technologies. In just a few years, American corporations, universities, think tanks, and the government have devoted hundreds of policy papers and projects to addressing this challenge.

Yet forget about AI itself. It's all about the math, and America is failing to train enough citizens in the right kinds of mathematics to remain dominant.

AI IS not simply a black box that will grow if unlimited funds are poured into it. Dozens of think tank projects and government reports won't mean anything if Americans can't maintain mastery over the fundamental mathematics that underpin AI. Calls for billions of dollars in related investments won't add up without the abstract math ability needed to transform the economy or military.

What we call AI is in fact a suite of various algorithms and distinctive developments that draw heavily from advanced mathematics and statistics. Take deep neural networks, which have understandably become a CIO/CTO buzzword, as an example. These are not artificial brains. They are stacks of information-transforming modules that learn by repeatedly computing a chain of what are known as gradients (something rarely taught in high school calculus), which are the backbone of a family of algorithms known as backpropagation.
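For readers unfamiliar with the term, that "chain of gradients" can be made concrete with a tiny two-layer network trained by backpropagation, with the chain rule written out by hand. The network and data below are a toy illustration, not any production system.

```python
# A tiny two-layer network trained by backpropagation: gradients flow from the loss
# back through each layer via the chain rule.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))          # 4 toy examples, 3 input features
y = rng.standard_normal((4, 1))          # toy regression targets
W1, W2 = rng.standard_normal((3, 5)), rng.standard_normal((5, 1))

for step in range(300):
    # Forward pass: input -> hidden (ReLU) -> output -> squared-error loss.
    h = np.maximum(0, x @ W1)
    pred = h @ W2
    loss = np.mean((pred - y) ** 2)

    # Backward pass: the chain of gradients, one layer at a time.
    d_pred = 2 * (pred - y) / len(y)      # dLoss/dPred
    d_W2 = h.T @ d_pred                   # dLoss/dW2
    d_h = d_pred @ W2.T                   # dLoss/dHidden
    d_h[h <= 0] = 0                       # ReLU gate: no gradient where the unit was off
    d_W1 = x.T @ d_h                      # dLoss/dW1

    W1 -= 0.01 * d_W1                     # gradient-descent update
    W2 -= 0.01 * d_W2

print(round(loss, 4))                     # loss shrinks as the gradients are followed
```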

Similar dissections can be made for all of machine learning, which is a study of how to program computers to learn a task rather than execute a rigid pre-coded one. The ability to rapidly classify massive amounts of data, identify patterns, predict outcomes, and self-learn, all comes down to ever more sophisticated algorithms paired with increasingly powerful computational power and a commensurate amount of data.

From iPhones to Summit, the world's most powerful supercomputer, located at the Oak Ridge National Lab, and from Google to Facebook, these computing platforms and programs use incredibly complex mathematical calculations to do everything from modeling nuclear detonations to providing web search results.

And contrary to what some prominent AI advocates, like Kai-Fu Lee, author of AI Superpowers, argue, it's not simply all about data. Lee is famous for saying that, today, data is the oil of the early twentieth century, and that China, which has the most data, is the new Saudi Arabia. Yet without the right type of math, and those who can creatively develop it, all the data in the world will only take you so far, and certainly not far enough into the future AI advocates boldly envision.

That is why cutting-edge mathematics focuses, among other things, on being able to work with partial information loss and sparse data, or to discard useless information that is collected along with the core data. No matter how you slice it, the world runs on ones and zeros, and on the whiteboards where the algorithms that manipulate them are thrashed out. Yet one can't simply jump into creating more powerful and elegant algorithms; it takes years of patient training in ever more complex math.

Unfortunately, American secondary school and university students are not mastering the fundamental math that prepares them to move into the types of advanced fields, such as statistical theory and differential geometry, that make AI possible. American fifteen-year-olds scored thirty-fifth in math on the OECD's 2018 Program for International Student Assessment tests, well below the OECD average. Even at the college level, American students, not having mastered the basics needed for rigorous training in abstract problem solving, are often taught mostly to memorize algorithms and insert them when needed.

This failure to train students capable of advanced mathematics means fewer and fewer U.S. citizens are moving on to advanced degrees in math and science. In 2017, over 64 percent of PhD candidates and nearly 70 percent of master's students in U.S. computer science programs were foreign nationals, and fully half of doctoral degrees in mathematics that year were awarded to non-U.S. citizens, according to the National Science Foundation. Chinese and Indian students account for the bulk of these, in large part because the most advanced training in American universities still outstrips that in their home countries, though the gap is closing with respect to China. Yet that also means that the majority of those being prepared by U.S. universities to open new frontiers in computer science and abstract math are not Americans. Some of these non-citizens will stay here. But many will return home to help grow their countries' burgeoning tech industries.

There are good reasons to argue that U.S. visa restrictions on skilled workers should be eased, tempting more of those foreign nationals to stay in the United States after their studies have been completed. But the bottom line is that not enough American citizens are choosing to major in advanced math, which has corresponding implications for everything from foreign competition to Silicon Valley's startup culture, from national security concerns to whether or not U.S. corporations consider themselves American.

AMERICA'S SELF-INFLICTED math wounds matter because the Chinese Communist Party has made global AI dominance a national goal by 2030, and is leveraging its resources to make it so. Indeed, the world now sees the battle over AI as a battle between China and the United States. Under General Secretary Xi Jinping, China has invested heavily in AI-related technologies, making them a core focus for the modernization of Chinese industry. This effort underpins Beijing's Made in China 2025 initiative, which seeks to make the country dominant in most high-tech processes.

China's AI market is now estimated to be worth around $3.5 billion, and Beijing has set a goal of a one-trillion-yuan ($142 billion) AI market by 2030. The government has pledged the equivalent of $2.1 billion to build an AI industrial park outside Beijing, among other major investments. Leading the effort is Huawei, which has established AI research laboratories in London and Singapore, unveiled a new generation of AI processor chips, and laid out an all-scenario AI strategy.

Much of China's spending is directed towards facial and voice recognition technologies like those of Megvii and SenseTime, along with natural language processing. The focus on these particular technologies is purpose-driven: Beijing is using the country's facility in applied mathematics and AI, whether honed in America or at home, to create a digital surveillance state that is unrivaled in history. For example, a new law requires all individuals registering new mobile phone numbers to have a facial scan. The world's most advanced algorithms are being used to aid in monitoring and controlling Chinese society and bolster the country's security services.

Some of this is already plainly visible. Beijing is notoriously creating a social credit system based on facial recognition and other technologies that rewards or penalizes certain behavior (jaywalking, credit unworthiness, insufficient patriotism, and the like) so as to shape individuals' private and public behavior. The two far western provinces of Xinjiang and Tibet have become virtual police states within China, as their Uighur Muslim and Buddhist Tibetan populations are ceaselessly monitored and controlled through the application of facial recognition and forced DNA collection.

China's AI focus has global security implications as well, given Beijing's military-civil fusion policy, which mandates that all high-tech advancements be made available to the Chinese armed forces for incorporation in weapons systems. Just as insidiously, Beijing is reportedly recruiting the country's smartest high school students to train them as AI weapons scientists. A recent National Science Foundation report noted that Chinese government policies do not share U.S. values of science ethics, raising concerns over U.S.-trained Chinese scientists employing advanced research that benefits the CCP's surveillance state and military.

Even as China's AI industry works to catch up to its American counterpart in terms of talent, the country is investing in its mathematical ability. Chinese students ranked number one in the world in math (as well as science and reading) in the latest PISA tests. While there is good reason to doubt the veracity of at least some of the Chinese scores, there is no question that China is focusing heavily on STEM education, outstripping America and European nations. The recently announced Strong Base Plan will recruit the country's top students to study mathematics, as well as physics, chemistry, and biology, among other fields.

Read this article:

Why China's Race For AI Dominance Depends On Math - The National Interest

How this philosophy graduate became interested in the world of AI – Siliconrepublic.com

Iva Simon Bubalo talks about her career path to date, from studying philosophy to wanting to build tech solutions.

Iva Simon Bubalo is about to start studying AI, but took a very winding path to get there. While she is currently working as an analyst in the technology sector in Dublin, she started out studying philosophy eight years ago.

At that point, Simon Bubalo was curious about the human mind, she tells me, and how certain ideas such as justice, fairness and freedom influenced the course of history.

I vividly remember one of the first Introduction to Philosophy classes where the professor asked: So, you want to become professional thinkers? Why?

The room was suddenly filled with differences between the status-quo and what things ought to be, different value systems, ethical considerations of right and wrong, talented beginner impact assessments and the efforts to identify what is essential, she says. In order to be able to do those things effectively, our minds needed to train rigorously in one specific tool, and that was logic.

Simon Bubalo now has a master's degree in philosophy and language studies, but her next step will see her take part in the master's course in computer science and AI at NUI Galway.

So, how does philosophy compare to the world of tech and AI? According to Simon Bubalo, there are a lot of touch points between the two, particularly around how the human mind and reasoning work. The key, she says, is that philosophical thinking brings disruption.

Philosophy is a unique primer for analytics, project and stakeholder management, she explains, as it involves an understanding of human rationale, development cycles, and identifying needs and wants.

Being able to ask the right questions, form the right hypotheses, consider the problem from multiple angles, identify the root cause, lead or follow the argument and instil a sense of purpose in the group are all soft skills valuable in any business setting, she says.

When we do these workshops and design-thinking sessions, I realise that this is a skill that I am familiar with from my humanities background – IVA SIMON BUBALO

She also saw parallels between the two fields in terms of exploring basic elements of reasoning, decision making and problem solving.

How to formalise a thought, structural relationships between concepts, assumptions and implicatures in human language, and creating a general world view for machines seemed to be a common and very fertile ground for collaboration between the two disciplines, she says.

In AI, we still don't have a solid, unified definition of intelligence, while we're working with this notion every day of its development. That's what drew me to the field initially, but I also saw that there are common areas.

When she first began working in tech, Simon Bubalo was surprised by how much her background in logic helped.

[It] helped me to understand, for example, database design, she says. I was properly surprised at how much the two areas are linked, where you have to kind of understand relational models and things like that.

She was also surprised by the relevance of critical thinking and design thinking the kind of soft skills that you really developed studying philosophy.

I managed to find a lot of use cases also for stakeholder management when we were taking requirements from a stakeholder trying to understand how to build a product or report best, according to their needs, she explains.

And when we do these workshops and design-thinking sessions, I realise that this is a skill that I am familiar with from my humanities background. To actually understand human needs and wants is basically what philosophy has been studying.

But why did Simon Bubalo move from philosophy to the field of technology in the first place?

The building aspect drew me to it, she says. I wanted to build something in philosophy. You know, it's a theoretical discipline and it's very rigorous, but it is building – it's building culture, it's building societies, it's building what I think the product of humanity is: our mentality.

So what actually drew me to technology was that element of problem solving and its application in the world. Building some products that solve societal issues.

She says that making the leap from philosophy to technology was scary at the beginning, as she took on a course in data analytics while also working in the field. But I'm just kind of a nerdy person. So I was really excited, too, she adds.

I suppose you have a lot of pressure if you're running projects at work and you have deadlines, but you also have deadlines for college. It can get difficult. It's really important to have support in this case, such as taking days for study leave and getting exposure to people who are already in the analytics field.

I can imagine if I didn't have that kind of support, it would have been very difficult.

As a woman in technology, especially one who started out in a non-STEM field, Simon Bubalo says that her support network at Women in AI Ireland has been critical.

Joining Women in AI Ireland was pivotal in a sense that I was suddenly exposed to so many learning opportunities and empowered to follow my path to specialise in AI technology, she says. That feeling is truly empowering.

The group launched last year with the goal of increasing the number of women in AI. It's led by Alessandra Sala, who is the head of analytics research at Nokia Bell Labs. After attending her first Women in AI event, Simon Bubalo joined as a committee member and contributed to organising the next event.

In the future, I wish to see more women from all different backgrounds, especially humanities and social sciences, and underrepresented minorities find their place in Ireland's data science and AI ecosystem.

Simon Bubalo finishes up our conversation with some advice for people thinking about making a big career pivot.

In almost every area, I would say it's a matter of being explicit about where you want to go and what you want to do. After that, I find that people are very supportive. I think at the beginning of your career, that's extremely important, to be able to show your curiosity and have people respond and support you.

And for anyone hoping to move into the field of AI – especially women – she adds: I would advise people to anchor themselves to something that they know, because AI is a highly interdisciplinary field and I feel like people from all different sorts of backgrounds can find something that's related, whether it's psychology, agriculture or marketing. I feel like it's a technology that's starting to be applied everywhere.

Also, get to know the ecosystem. It's important to have exposure, because you don't know what you don't know. The people around you who are far ahead in the industry can give you guidance.

Having that support of the community is really, really empowering. Having the support of the community of women while being in this tech industry is very empowering. And, you know, I have confidence that if I cant solve something, I know a place where I can go and ask.

Visit link:

How this philosophy graduate became interested in the world of AI - Siliconrepublic.com