Daily Archives: November 25, 2019

I Let AI Choose My Outfits for a Week. Here’s What It Did to Me – VICE UK

Posted: November 25, 2019 at 2:46 pm

Like gourmet food, books, vinyl, whisky, watches, coffee and sex toys, you can now have men's clothing shipped to your door in a subscription box. Specifically, that entails the arrival of a huge cardboard box packed tight with seasonal essentials. This isn't a standard clothing delivery, where you've scrolled endlessly through an online store's sale section, then inevitably ended up getting a new, un-discounted drop saved in your 'favourites' at 1.37am the night before. Instead, the clothes are selected, depending on the service you choose, by data scientists, AI and teams of keen stylists.

Think of these boxes as the next base in fashion's tech affair. Facebook is developing AI technology, called Fashion++, to elevate your look; meanwhile, last month, Amazon launched their AI app StyleSnap (basically Shazam for clothes). Clearly, the omnipresent, multitudinous algorithm is not content with monogamy. After solidifying relationships with music, film, TV and your newsfeed, it's about to fuck with your style.

You could say that these clothing boxes are marketed at the same sorts of people who commit to assembling Hello Fresh recipes past the free trial period. In that way, the boxes aim to provide high-quality garms for men who either don't have time to shop, need a stylist, or both. But do they work?

I signed up to three services. First, THREAD, an artificial intelligence start-up that received a $13 million investment from H&M's venture arm CO:LAB last year. Then Stitch Fix, which, along with picks from stylists, runs suggestions through an algorithm created by its 100+ data scientists. And, lastly, Outfittery, which also uses a combination of artificial and human intelligence to select its clothes.

Each service requires you to fill in a questionnaire. Those vary slightly from brand to brand (Outfittery asks if you're looking for stuff for a specific occasion, like a wedding or night out; Stitch Fix wants to know how you commute to work), but they all cover the essentials: weight, height, fit.

You're also asked a few broader style questions. Stitch Fix and Thread both provide images of various fits; you tell the service whether you're into them or not, and this feeds into the algorithm. The same goes for brand choice. Stitch Fix and Outfittery both fling various logos in your direction (All Saints, Fred Perry, Reebok, etc) to see what you'd want to hold onto. A few bits of detail later and you're all set. The data gets fed into the algorithm and the service designates you a stylist.

THREAD is the most detailed of the three. I'm immediately assigned to a lad called Luke, who pops up in my email inbox with a note and headshot so I can see who he is. He tells me he's "written and edited an online menswear journal for the past two years", though doesn't specify which one. The message is also automated, making it feel less human and more like a creepy insight into our AI future.

Stitch Fix keeps things low-key. No boys jump into your DMs. Instead, you're told who your stylist is in a note accompanying your clothing delivery. In my case, I'd been styled by a woman named Katie, though without personal details and an image, it wasn't clear "who" she was. Meanwhile, Outfittery, the most human of the lot, includes verbal interaction in its service. AKA, the stylist calls you up to go through some style stuff, though this is mostly about you and what you want, rather than their previous work.

Finally, the items go in the post. Voila!

A bit about me: I'm 27; I rarely, if ever, dress in anything smart; I like wearing colours, like in the photo above (I'm in green jeans). On an average Monday, though, I want my fit to feel as close to being in bed as possible. I fed a version of this information into each program, asking Stitch Fix and THREAD for a casual look, as well as adding day-to-work wear into Outfittery for some variety.

First up, THREAD. They differ slightly from the other two brands by emailing across a full outfit in advance; you then choose whether to buy or leave it, whereas the other two services don't let you in on what you've ordered until it arrives (though you can send unwanted items back, and only pay for what you keep). My (robot? real?) boy Luke from THREAD had suggested a casual, dressed down lewk.

A casual fit from THREAD

Not my usual bag, I'll be honest. The blue and black in the same fit threw me. Though several men still commit this mistake, it's common knowledge these colours simply do not go well together. A quick Google also showed the pair of shoes they'd offered (Cole Leather Trainer from Shoe The Bear) was available from other retailers for £29 less than they cost through THREAD.

Next up: Stitch Fix. Consider them a bit of a Silicon Valley rarity because, unlike Spotify, Slack, Netflix, etc, they actually turn a profit. Valued at $3 billion, they launched their UK arm earlier this year. Vogue.co.uk have also interviewed their founder. I was excited.

A look and another look from Stitch Fix; jeans, model's own

Three more Stitch Fix looks

I'd completed the 80 questions in Stitch Fix's style survey, pre-fix, and thought the algorithm knew me well. But wuh-oh! Maybe not. Of the items I received (an All Saints jumper, a Lyle and Scott t-shirt, an orange shirt, a pink jumper and a blue one from one of Stitch Fix's own brands; both they and THREAD sell in-house items alongside high street names), just one, the leopard print All Saints jumper, fit my style.

A spokesperson says the service gets better with use, i.e. you note the items you don't like, and why, in Stitch Fix's check-out review, then return them free of charge, then book in another fix. With each go, the data you provide helps the algorithm select better items, which the stylist then combines into a look.

Round number three: Outfittery, the highest end of all the boxes on offer.

Oi, oi pass the stocks! A fit by Outfittery

Three more fits, by Outfittery

The most muted of the three services, Outfittery looks to be designed with the big-boy businessman in mind. Just check me out up top in a suit! Due to the phone call and the variety of items on offer (two full looks: one for work, if I went to that kind of formal-looking work; one for a relaxing weekend), it's also the closest of these services to a full styling experience. In a high-flying life, maybe I'd wear this stuff.

Dressing yourself is an inherently human experience. We decide not to be naked. We choose what clothes we'd like to wear each morning, having already picked them from the shop. So bringing technology into the picture feels clinical. If you're into fashion, you appreciate small quirks. Like the way someone's fit pops, thanks to a splash of colour on their socks. Or a one-off find: whether it's a hand-me-down, charity shop choice or high-end sweatshirt available in a limited run.

These services currently do little to satiate this thirst. They, essentially, offer outfits set to a specific archetype: guy who likes to wear comfy sportswear on the weekend; dude who thinks wearing faded pink is adventurous; dull tones upon dull tones upon dull tones. Blue, black, grey and green.

Spotify's algorithm helped generate a genre of music. Spotify-core, or "streambait pop" as Liz Pelly defined it in The Baffler, is a type of music that could easily fit on mood- and affect-oriented playlists like Chill Hits, Chill Tracks, or Sad Songs. Right now, AI-powered styling is, in effect, the same thing. It's not pushing the boat out. It's dressing men how they've always dressed.

For some men, putting on the same three looks as everyone else is OK. They arrive home late, daily, with little time for anything else but a pre-prepared meal and two episodes of Netflix. They read the news they're given. Perhaps they would like clothes delivered on a monthly or seasonal basis, without the stress of picking them out. Perhaps it'll free up their time for more work.

Removing the hours spent looking for clothes is the advantage of these services. Will they make you stylish? Depends how you define looking good. Crucially, however, they cultivate a lifestyle where you can return to these boxes again and again, to feed the big-data algorithm. Luxury capitalism, now!

@ryanbassil


Text-Savvy AI Is Here to Write Fiction – WIRED

Posted: at 2:46 pm

A few years ago this month, Portland, Oregon, artist Darius Kazemi watched a flood of tweets from would-be novelists. November is National Novel Writing Month, a time when people hunker down to churn out 50,000 words in a span of weeks. To Kazemi, a computational artist whose preferred medium is the Twitter bot, the idea sounded mildly torturous. "I was thinking I would never do that," he says. "But if a computer could do it for me, I'd give it a shot."

Kazemi sent off a tweet to that effect, and a community of like-minded artists quickly leapt into action. They set up a repo on GitHub, where people could post their projects and swap ideas and tools, and a few dozen people set to work writing code that would write text. Kazemi didn't ordinarily produce work on the scale of a novel; he liked the pith of 140 characters. So he started there. He wrote a program that grabbed tweets fitting a certain template: some (often subtweets) posing questions, with plausible answers pulled from elsewhere in the Twitterverse. It made for some interesting dialogue, but the weirdness didn't satisfy. So, for good measure, he had the program grab entries from online dream diaries and intersperse them between the conversations, as if the characters were slipping into a fugue state. He called it Teens Wander Around a House. First novel accomplished.

It's been six years since that first NaNoGenMo (that's Generation in place of Writing). Not much has changed in spirit, Kazemi says, though the event has expanded well beyond his circle of friends. The GitHub repo is filled with hundreds of projects. "Novel" is loosely defined. Some participants strike out for a classic narrative (a cohesive, human-readable tale), hard-coding formal structures into their programs. Most do not. Classic novels are algorithmically transformed into surreal pastiches; wiki articles and tweets are aggregated and arranged by sentiment, mashed up in odd combinations. Some attempt visual word art. At least one person will inevitably do a variation on "meow, meow, meow..." 50,000 times over.

"That counts," Kazemi says. In fact, it's an example on the GitHub welcome page.

But one thing that has changed is the tools. New machine learning models, trained on billions of words, have given computers the ability to generate text that sounds far more human-like than when Kazemi started out. The models are trained to follow statistical patterns in language, learning basic structures of grammar. They generate sentences and even paragraphs that are perfectly readable (grammatically, at least) even if they lack intentional meaning. Earlier this month, OpenAI released GPT-2, among the most advanced of such models, for public consumption. You can even fine-tune the system to produce a specific style (Georgic poetry, New Yorker articles, Russian misinformation), leading to all sorts of interesting distortions.

GPT-2 can't write a novel; not even the semblance of one, if you're thinking Austen or Franzen. It can barely get out a sentence before losing the thread. But it has still proven a popular choice among the 80 or so NaNoGenMo projects started so far this year. One guy generated a book of poetry on a six-hour flight from New York to Los Angeles. (The project also underlined the hefty carbon footprint involved in training such language models.) Janelle Shane, a programmer known for her creative experiments with cutting-edge AI, tweeted about the challenges she's run into. Some GPT-2 sentences were so well-crafted that she wondered if they were plagiarized, plucked straight from the training dataset. Otherwise, the computer often journeyed into a realm of dull repetition or uncomprehending surrealism.

"No matter how much you're struggling with your novel, at least you can take comfort in the fact that AI is struggling even more," she writes.

"It's a fun trick to make text that has this outward appearance of verisimilitude," says Allison Parrish, who teaches computational creativity at New York University. But from an aesthetic perspective, GPT-2 didn't seem to have much more to say than older machine learning techniques, she says, or even Markov chains, which have been used in text prediction since the 1940s, when Claude Shannon first declared language was information. Since then, artists have been using those tools to make the assertion, Parrish says, that language is nothing more than statistics.


How Augmented Reality and Artificial Intelligence Are Helping Entrepreneurs Create a Better Customer Experience – Entrepreneur

Posted: at 2:46 pm

Expert insights on taking personalization to the next level.

November 25, 2019 | 4 min read

Opinions expressed by Entrepreneur contributors are their own.

Michael Bower helps companies provide cool experiences to their customers on the web. As CEO of Sellry, an ecommerce solutions company, he combines creativity with the latest technology to propel brands into the future. Alongside clients, Sellry works to reimagine (and design) the future of ecommerce.

What new technology do you think will greatly impact consumer-facing startups in the near future?

AR is going to completely change many industries. We've seen applications where you can just point your phone at something and it'll tell you about it. We've also seen smart mirrors. There are even APIs that will measure your body from a photograph with a degree of accuracy. A lot of these APIs are nearly real-time. Some of them can even look at multiple subjects at the same time and figure out many things about them. It's the future.

How soon do you think this will be a common practice?

We did an experiment this year. We built out an augmented reality experience of an imaginary office space for an ecommerce trade show. We wanted to see how relatable it was. Would people get it? Would they understand it? And what we found was, it's still a little bit early. Enterprises are toying with the idea; some of them are trying things, especially in the sports and entertainment industries. Fashion is obviously trying things for sizing. I think we're looking at 2021 for when we pass that early-adopter stage and start getting into the early majority.

What industry do you think will be the first to benefit from AR?

I think certain industries like real estate, architecture and B2B sales will adopt it faster, because AR will give them the ability to conduct a demonstration in the field or to evaluate a pitch better. There are enormous companies in those spaces already investing absolutely insane amounts of money into AR.

What about artificial intelligence? How are companies using it to enhance the customer experience?

If you've ever looked at the cookies that are stored on your machine, they're crazy. Some of them will think that you're probably into things that you're totally not into. I've looked at my cookies and been like, "Wow, they think I'm interested in soap operas," which I'm totally not. Cookies are notoriously unreliable. And that's what most people are using for advertising and retargeting. Basically, it's a "spray and pray" approach. What we want to do is help companies take better advantage of their audience: the people that are on their site, telling them real things about themselves.

Can you give an example?

Let's say that we're dealing with a supplements company. Right now, we're segmenting based on a few factors, and we think we know who our customer is. And we've done a lot of testing that is assumption-based, meaning we're taking things that we already know and using that to drive our decision-making. Now, the AI tooling for this stuff is already there in principle, where you can just turn on artificial intelligence and it'll figure out who your customer is, how you should message them and what the cadence of doing that should be. But right now, for the mid-market and even for certain specialized enterprise markets, the AI tooling takes a long time to deploy, so it's not quite there in a deployable manner. Within a couple of years it will be.
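The "figure out who your customer is" step Bower describes usually means clustering behavioural data. A toy k-means sketch gives the flavour; the customer data, feature choice and function names here are invented for illustration, not Sellry's tooling:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy k-means: group points (customers) around k centroids."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance)
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

# Hypothetical customers: (orders per month, average basket value)
customers = [(1, 20), (2, 25), (1, 22), (10, 90), (12, 110), (11, 95)]
clusters = kmeans(customers, k=2)
```

On this invented data the algorithm separates occasional small-basket shoppers from frequent big spenders with no human-defined segments: the split emerges from the distances alone, which is the "turn it on and it figures out your customer" idea in miniature.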

How can companies that currently don't use data science prepare to implement artificial intelligence as it becomes more widely available?

We encourage companies to really dial into customer discovery and understanding the customer deeply, and then build out a higher-fidelity version of the current generation of personalization and segmentation. Based on that, within the next couple of years we want to have the ability to deploy for our clients technical wizardry that's going to take those human-defined segments and personas and take them even farther. AI-based segmentation, and the ability for the mid-market to adopt AI, is going to be super amazing and exciting.


Knowledge mining will drive the next wave of AI – TechHQ

Posted: at 2:46 pm

If 2019 taught us anything, it was that every technology vendor, large and small, had to have a stance on Artificial Intelligence (AI) and the software automation advantages it can deliver.

Some vendors got so excited about AI, and the Machine Learning (ML) that allows intelligence engines to get smarter, that they forgot to talk about so-called digital transformation. But only for a while, obviously.

Industry spin and subterfuge notwithstanding, AI may now have another chapter to deliver, and it comes in the shape of Knowledge Mining. But before we get to what it is, let's remember how we got here.

Knowledge Mining stems from Data Mining, a term that was popularized in the nineties and carried us through the millennium. Data Mining is an interdisciplinary process incorporating statistics, mathematical modelling, pattern recognition and other aspects of information analytics.

In basic terms, Data Mining involves sifting through massive data sets to establish patterns and create what are known as association rules (rather like an IF/THEN statement) to direct action based upon the data relationships discovered. People do still talk about Data Mining, but AI has in many cases displaced the term.
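An association rule is just two counts over the data: how often the IF and THEN parts co-occur (support), and how often the THEN holds when the IF does (confidence). A minimal sketch, with an invented basket dataset purely for illustration:

```python
# Toy transactions: each set is one shopping basket
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "butter"},
]

def rule_stats(antecedent, consequent, transactions):
    """Support and confidence for the rule IF antecedent THEN consequent."""
    n = len(transactions)
    both = sum(1 for t in transactions if antecedent <= t and consequent <= t)
    ante = sum(1 for t in transactions if antecedent <= t)
    support = both / n                         # how often the full rule appears
    confidence = both / ante if ante else 0.0  # how often THEN holds, given IF
    return support, confidence

support, confidence = rule_stats({"bread"}, {"milk"}, transactions)
```

Here "IF bread THEN milk" holds in 2 of 4 baskets (support 0.5), and in 2 of the 3 baskets containing bread (confidence about 0.67); a miner keeps rules whose scores clear some threshold and acts on them.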

While Data Mining has been useful, information scientists argue that it was restricted to creating comparatively narrow AI models, i.e. it was useful for doing (and learning) one specific thing, such as tracking one type of image, categorizing one work process or some other defined and essentially discrete task.

Knowledge Mining widens the length, breadth and density of the intelligence model being constructed.

Data Mining centres on the processing of relatively well-structured information sets, often held in databases where information is neatly deduplicated, verified and parsed into appropriate fields. Knowledge Mining goes deeper in that it involves the ingestion of massive datasets spanning structured, semi-structured and unstructured information.

Knowledge Mining also embraces a more complex level of business logic and is capable of understanding where connected information streams come together to form real-world business processes.

According to John JG Chirapurath, general manager, Azure Data & AI at Microsoft: "More than two-thirds (68 percent) of respondents to a recent Harvard Business Review Analytic Services survey believe knowledge mining is important to achieving their companies' strategic goals in the next 18 months."

Chirapurath points to the challenge at hand on the road to Knowledge Mining. The central issue with old information mining techniques was that, by the time the data was identified, classified and ratified, it was only fit for archiving. Where Knowledge Mining goes further is in its use of metadata (the information about information), which speeds the entire analytics process up from the start.

This is of particular importance when we look at the ingestion of unstructured data into Knowledge Mining engines. Where that unstructured data comes in the form of videos, voicemails, emails, images or some other traditionally multi-form-factor shape, we need to know what it relates to faster than manual classification by human beings would allow.

Only when we can track information automatically and sidestep manual work can we start to use Knowledge Mining for things like real-time anomaly detection.

"With knowledge mining, it is now possible to train a system to recognize the key data to extract from a statement, whether it is in a PDF, a scanned document, or spreadsheet format, and to do it consistently. The same is true for more complex processes, such as allocating invoices to the right account or pulling data from investment documents, which can vary in their presentation, and using that data to validate investment terms," wrote Chirapurath in the original article.

Knowledge Mining is predicted to have the most impact on enterprise organizations working in financial services, healthcare, manufacturing and legal services. As we enter the early stages of this technology, we can reasonably suggest that most customers won't do the mining themselves; it is more likely that they will buy it as a service from a cloud provider.

Awareness of Knowledge Mining is still comparatively new, so much so that most people aren't even saying "KM" for short. Oops, we just did, so now you have the knowledge.


Five Questions With a16z’s Vijay Pande on AI and Making New Drugs – Xconomy

Posted: at 2:46 pm

In startup world these days, the word "biotech" is increasingly accompanied by "computational" and two two-letter initialisms: AI and ML.

Those tools (artificial intelligence and machine learning, respectively) have been around for decades, but in recent years have become faster and cheaper, accelerating their use by those in the business of discovering and developing new drugs. Another startup looking to take advantage of those improvements, South San Francisco-based Genesis Therapeutics, has scored $4.1 million in seed funding and publicly joined the growing fray of biotechs with grand ambitions of disrupting the slow, costly process of discovering and developing new medicines.

Andreessen Horowitz, also known as a16z, led its seed round, one of a handful of seed-stage investments it has made in biotech. Felicis Ventures, another VC firm based in Silicon Valley, also invested. Genesis says it plans to focus on developing small molecule drug candidates for patients with severe and debilitating disorders, and that it aims to advance investigational drugs it discovers itself and in partnership with pharma companies.

The technology that underpins the company was invented in the Stanford University lab of a16z general partner Vijay Pande, who joined the firm in 2015 to lead its debut $200 million biotech fund. Since then the firm has invested in AI drug development outfits including Erasca, Insitro, and TwoXar, and raised $450 million for a second bio fund.

Pande recently talked with Xconomy about how AI will impact drug development, what differentiates Genesis, and why biotechs need to adopt a portfolio mindset. The conversation has been lightly edited and condensed for clarity.

Xconomy: What sets Genesis apart from the many AI drug development startups operating today?

Vijay Pande: This is technology that came out of my Stanford lab that I was running before I left to found the bio fund at Andreessen Horowitz, so I've known [founder] Evan Feinberg for five to six years, and I know the technology very well, so it was very natural for me to get excited about that part. The part I think really differentiated Evan's approach here was getting a really great drug hunter like [acting Chief Scientific Officer] Dr. [Peppi] Prasit involved very early. I think that he's often thought of as a drug hunter's drug hunter, and Evan getting him on board I think is a huge win for filling out that team and also a validation of the significance of the technology.

X: What is different about the software tools that Genesis plans to use to search for new drugs than the algorithms used by other such biotech startups?

VP: There [are] 200, 250 companies now in this AI/drug design space. Given the prevalence of tools like [open source ML framework] TensorFlow, algorithms in the public domain, and public data, it doesn't take much to build something just with those off-the-shelf pieces that looks pretty good, especially compared to what people could do before. All of those companies, if they're using basically the same algorithms, the same tools, and the same data, are going to get the same answers as each other. So differentiation is really going to be key.

Evan hasn't just done what most people do, which is take algorithms that people use in computer vision, from identifying cats on the internet and that type of thing. For images, it's very clear what the [statistical] representations are: pixels. For molecules, it's not clear at all. One of the key advances that Evan and Genesis made is in that area of representation: how to think about the right way to explain what a molecule is to a computer. They have figured out the right way to represent molecules such that AI and other algorithms can take advantage of that representation.
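For context on what "representation" means here: a common published baseline treats a molecule as a graph, with one-hot atom features and a bond adjacency matrix. Genesis's own representation is not public, so the sketch below is a generic illustration of that baseline (ethanol's heavy atoms as a toy input), not the company's method:

```python
def featurize(atoms, bonds, vocab=("C", "N", "O")):
    """One-hot atom features plus a bond adjacency matrix: a minimal
    graph representation a neural network could consume."""
    features = [[1.0 if atom == v else 0.0 for v in vocab] for atom in atoms]
    n = len(atoms)
    adjacency = [[0.0] * n for _ in range(n)]
    for i, j in bonds:
        adjacency[i][j] = adjacency[j][i] = 1.0  # bonds are undirected
    return features, adjacency

# Toy example: ethanol's heavy atoms (C-C-O), bonds as index pairs
feats, adj = featurize(["C", "C", "O"], [(0, 1), (1, 2)])
```

Unlike an image's fixed pixel grid, this encoding must decide what counts as a feature (element type here; real systems add charge, hybridization, 3D geometry and more), which is exactly the design space Pande says Genesis is competing on.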

X: Genesis is a very early-stage company, especially compared to others a16z has backed in the space. What have these advanced algorithms allowed it to do that the firm believes will allow it to develop new drugs more efficiently than others?

VP: Five years ago, people largely had to come up with the right features by hand. In a sense, their brains were the first part of the neural network. With deep learning right now, I think the big difference is that if you have the right representation, deep learning can learn the right features from there.

Sarah de Crescenzo is an Xconomy editor based in San Diego. You can reach her at sdecrescenzo@xconomy.com.


‘The AI will see you now.’ How tech might alter the doctor-patient relationship – KUOW News and Information

Posted: at 2:46 pm

On this week's episode of Primed, we talk to Dr. Eric Topol, a cardiologist whose book "Deep Medicine" explores the impact of AI technology on health care.

Dr. Topol believes AI can help doctors build a more nuanced model of their patients' profiles: a model that more accurately represents the complex human beings who need care.

Three years ago, Dr. Topol was in excruciating pain.

He had just had his knee replaced. After the surgery, his leg was swollen and purple, and the pain was so intense, even opiates didn't soothe it.

He told his doctor that he couldn't sleep and had been crying because of the pain. Instead of trying to heal his knee, the doctor told him to get some antidepressants.

This was the moment Dr. Topol realized that something was broken in the healthcare system.

His orthopedist was seeing so many patients, he didn't have time to listen to Dr. Topol. He also didn't have time to review Topol's medical history, or he would have known Topol had a medical condition that explained why the knee was healing badly.

"The term 'health care' is off base," Dr. Topol said. "We don't really have care. It's rare to have doctors providing true care, even though they want to, because they're squeezed to the hilt."

Dr. Topol believes doctors are so busy, they don't have time to gather the necessary context for their patients. And even if they had more time, "there's just too much data for each human being," he said.

He's come to believe that AI technology could bring back the care in healthcare by providing doctors with a more fully developed profile of each patient. That could free doctors to focus on developing relationships with their patients rather than sorting through data.

AI can also give more autonomy to patients, so that they can use their devices to generate data and algorithms to help interpret it, Dr. Topol said.

He also thinks AI will be able to diagnose common problems, like skin rashes, ear infections, or urinary tract infections.

Letting an AI do that work could give human doctors more time to focus on one-on-one interactions with patients who are dealing with more complicated issues, he said.

Dr. Topol is optimistic that improvements in AI technology can create a more humane, effective health care system.

But the role that AI will play in medicine is still undefined, and it's difficult to know what the relationship between human doctors, AI medical technology, and patients will look like in the future. That relationship may depend on who ends up developing and deploying the technology.

Dr. Topol said, so far, the healthcare system has resisted innovation.

"The innovation needs to come from the outside," he said. "So tech titans like Amazon and Microsoft and Google and others, they're definitely going to be part of this."

This means that private companies may shape the future of the medical field.

"The question is, is their priority going to be to help consumers achieve a better patient-doctor relationship? Or is it going to be to improve their revenue and their enterprise?" Dr. Topol asked.

"What is their priority? I don't think they know that yet."

Listen to this week's episode of Primed to hear our full interview with Dr. Topol.

Music this episode includes Ripples on an Evaporated Lake by Raymond Scott.


To secure a safer future for AI, we need the benefit of a female perspective – The Guardian

Posted: at 2:46 pm

Everybody knows (or should know) by now that machine learning (which is what most current artificial intelligence actually amounts to) is subject to bias. Last week, the New York Times had the idea of asking three prominent experts in the field to talk about the bias problem, in particular the ways that social bias can be reflected and amplified in dangerous ways by the technology to discriminate against, or otherwise damage, certain social groups.

At first sight, the resulting article looked like a run-of-the-mill review of what has become a common topic, except for one thing: the three experts were all women. One, Daphne Koller, is a co-founder of the online education company Coursera; another, Olga Russakovsky, is a Princeton professor who is working to reduce bias in ImageNet, the data set that powered the current machine-learning boom; the third, Timnit Gebru, is a research scientist on Google's ethical AI team.

Reading the observations of these three women brought to the surface a thought that's been lurking at the back of my mind for years. It is that the most trenchant and perceptive critiques of digital technology, and particularly of the ways in which it has been exploited by tech companies, have come from female commentators. The thought originated ages ago as a vague impression, then morphed into an intuitive correlation and eventually surfaced as a conjecture that could be examined.

Perhaps female acuity towards tech might be a reflection of the fact that toys for boys have less attraction for women

So I spent a few hours going through a decade's worth of electronic records: reprints, notes and links. What I found is an impressive history of female commentary and a gallery of more than 20 formidable critics. In alphabetical order, they are Emily Bell, danah boyd, Joy Buolamwini, Robyn Caplan, Kate Crawford, Renée DiResta, Joan Donovan, Rana Foroohar, Megan E Garcia, Seda Gürses, Mireille Hildebrandt, Alice E Marwick, Helen Nissenbaum, Cathy O'Neil, Julia Powles, Margaret Roberts, Sarah T Roberts, Kara Swisher, Astra Taylor, Zeynep Tufekci, Sherry Turkle, Judy Wajcman, Meredith Whittaker, and Shoshana Zuboff. If any of them are new to you, any good search engine will find them and their work.

I make no claims for the statistical representativeness of this sample. It might simply be the result of confirmation bias. Because of this column, I read more tech commentary than is good for anyone and it could be that the stuff that sticks in my memory happens to resonate with my views.

It also goes without saying that there are plenty of trenchant male critics out there too: one thinks of Franklin Foer, Farhad Manjoo and Nicholas Carr, to name just three. In recent times, we have seen prominent industry males such as Sean Parker and Roger McNamee suffering from investors' remorse and confessing their horror at how things have turned out. And new organisations such as the Center for Humane Technology have appeared, dedicated to creating "a world where technology supports our shared wellbeing, sense-making, democracy, and ability to tackle complex global challenges" rather than undermining them.

Suppose for a moment, though, that my hunch is correct that the most powerful critiques of the technology, and of the industry based on it, come from female commentators. Why might that be? Could it be, for example, a reflection of the fact that that industry is demographically skewed and pathologically male-dominated and that its products, services and executives tend to reflect that?

It may also be no accident that in one area of digital technology machine learning women are likely to be more critical than men.

"AI researchers are primarily people who are male," observed Olga Russakovsky in the New York Times piece, "who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities... so it's a challenge to think broadly about world issues. There are a lot of opportunities to diversify this pool, and as diversity grows, the AI systems themselves will become less biased." Yeah, maybe.

Or perhaps female acuity towards technology might be a reflection of the fact that toys for boys have less attraction for women.

Years ago, Dave Barry, the great Miami Herald columnist, was lent a new Humvee when the vehicle was launched. He took his wife out for a spin. "What can this thing do?" she asked. Barry replied smugly that it could do cool stuff like inflating or deflating the tyres while going along at 70mph. She looked at him, open-mouthed, and then asked why in the name of God anyone would want to do that. "Er," he replied, stumped.

Which only goes to show that there are no such things as awkward questions, only awkward answers. And, currently, those are the only kind machine learning evangelists have.

Look back to the future: Does human history move in predictable cycles? The subject of a fascinating Guardian long read by Laura Spinney.

Lost in a mental fog: What's the cognitive impact of air pollution? Read the results of an alarming survey by Patrick Collison, co-founder of online payments platform Stripe and perhaps the most cerebral techie in Silicon Valley.

Dark age 2.0: In 2029, the internet will make us act like medieval peasants. The title of a lovely, acerbic essay in New York magazine by Max Read on what technology is doing to us.

Visit link:

To secure a safer future for AI, we need the benefit of a female perspective - The Guardian

Posted in Ai | Comments Off on To secure a safer future for AI, we need the benefit of a female perspective – The Guardian

Google’s new AI tool could help decode the mysterious algorithms that decide everything – ZDNet

Posted: at 2:46 pm

While most people come across algorithms every day, not that many can claim that they really understand how AI actually works. A new tool unveiled by Google, however, hopes to help common humans grasp the complexities of machine learning.

Dubbed "Explainable AI", the feature promises to do exactly what its name describes: to explain to users how and why a machine-learning model reaches its conclusions.

To do so, the explanation tool will quantify how much each feature in the dataset contributed to the outcome of the algorithm. Each data factor will have a score reflecting how much it influenced the machine-learning model.

SEE: How to implement AI and machine learning (ZDNet special report) | Download the report as a PDF (TechRepublic)

Users can pull out that score to understand why a given algorithm reached a particular decision. For example, in the case of a model that decides whether or not to approve someone for a loan, Explainable AI will show account balance and credit score as the most decisive data.
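The per-feature scores described above can be illustrated with a toy sketch. Nothing here reflects Google's actual API; the feature names, weights and baseline are hypothetical. For a linear scoring model, a feature's contribution relative to a baseline input is simply weight times (value minus baseline), which is the special case that attribution methods generalize to nonlinear models:

```python
def attribute(weights, baseline, x):
    """Return each feature's contribution to the score relative to a baseline input."""
    return {name: weights[name] * (x[name] - baseline[name]) for name in weights}

# Hypothetical loan-scoring features, weights and baseline.
weights = {"account_balance": 0.004, "credit_score": 0.01, "age": 0.001}
baseline = {"account_balance": 1000.0, "credit_score": 600.0, "age": 40.0}
applicant = {"account_balance": 5000.0, "credit_score": 720.0, "age": 35.0}

contrib = attribute(weights, baseline, applicant)
# Rank features by how strongly each pushed the decision, in either direction.
ranked = sorted(contrib, key=lambda k: abs(contrib[k]), reverse=True)
print(ranked)
```

For this applicant, account balance and credit score dominate the explanation, matching the intuition in the loan example above.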

Introducing the new feature at Google's Next event in London, the CEO of Google Cloud, Thomas Kurian, said: "If you're using AI for credit scoring, you want to be able to understand why the model rejected a particular application and accepted another one."

"Explainable AI allows you, as a customer, who is using AI in an enterprise business process, to understand why the AI infrastructure generated a particular outcome," he said.

The explanation tool can now be used for machine-learning models hosted on Google's AutoML Tables and Cloud AI Platform Prediction.

Google had previously taken steps to make algorithms more transparent. Last year, it launched the What-If Tool for developers to visualize and probe datasets when working on the company's AI platform.

By quantifying data factors, Explainable AI unlocks further insights, as well as making those insights readable for more users.

"You can pair AI Explanations with our What-If tool to get a complete picture of your model's behavior," said Tracy Frey, director of strategy at Google Cloud.

In some fields, like healthcare, improving the transparency of AI would be particularly useful.

In the case of an algorithm programmed to diagnose certain illnesses, for example, it would let physicians visualize the symptoms picked up by the model to make its decision, and verify that those symptoms are not false positives or signs of different ailments.

The company also announced that it is launching a new concept of what it calls "model cards": short documents that provide snapshot information about particular algorithms.

SEE: Google makes Contact Center AI generally available

The documents are essentially an ID card for machine learning, including practical details about a model's performance and limitations.

According to the company, this will "help developers make better decisions about what models to use for what purpose and how to deploy them responsibly."

Two examples of model cards have already been published by Google providing details about a face detection algorithm and an object detection algorithm.

Users can read about each model's outputs, performance, and limitations. For example, the face detection model card explains that the algorithm might be limited by the face's size, orientation or poor lighting.

The new tools and features announced today are part of Google's attempts to prove that it is sticking to its AI principles, which call for more transparency in developing the technology.

Earlier this year, the company dissolved its one-week-old AI ethics board, which was created to monitor its use of artificial intelligence.

See the original post here:

Google's new AI tool could help decode the mysterious algorithms that decide everything - ZDNet

Posted in Ai | Comments Off on Google’s new AI tool could help decode the mysterious algorithms that decide everything – ZDNet

GBT – AI Technology To Be Implemented Within Epsilon Program – AiThority

Posted: at 2:46 pm

Goal of Ensuring Ultimate Microchips Reliability

GBT Technologies Inc., a company specializing in the development of Internet of Things (IoT) and Artificial Intelligence (AI) enabled networking and tracking technologies, including its GopherInsight wireless mesh network technology platform and its Avant! AI, for both mobile and fixed solutions, announced that it is implementing its Avant! AI technology within Epsilon EDA (Electronic Design Automation) program with the goal of achieving increased reliability for microchips.

Read More: New IDC Spending Guide Sees Consumer Spending on Technology Reaching $1.69 Trillion in 2019

Avant! AI will be trained with IC (Integrated Circuit) reliability models based on physics-of-failure mechanisms. These models will be classified for a wide variety of microchip types, among them microcontrollers, microprocessors, memories, power ICs and others. The system will read a microchip's specifications and define reliability analyses to be automatically tested by Epsilon. As the design moves forward and a more physical layout is produced, the system will adapt to identify weak spots, predicting potential reliability failures due to physics phenomena like Negative Bias Temperature Instability (NBTI), Electromigration (EM), Hot Carrier Injection (HCI) and Time Dependent Dielectric Breakdown (TDDB). The system targets a chip's reliability prediction during early design stages, making correction easier. The goal is for Epsilon to provide a wide range of reliability predictions, ensuring reliable operation and efficient power consumption. Epsilon will predict, test and validate signals at risk. When potential failures are identified, Epsilon will perform an Auto-Correct to resolve the issue. It is the goal of Epsilon to ensure that microchips will not overheat and fail due to excessive power consumption or faulty design. GBT believes that its reliability predictions, early detection and auto-correction will become a key factor when designing modern chips, especially for fields with high reliability demands like military, aviation/space and medicine.
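GBT does not disclose its models, but physics-of-failure lifetime predictions of the kind described classically rest on closed-form equations. As a minimal sketch, electromigration (EM) lifetime is commonly estimated with Black's equation, MTTF = A * J^-n * exp(Ea / (k * T)); the constants below are generic textbook-style values, not GBT's:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def em_mttf(current_density_a_cm2, temp_k, a=1e10, n=2.0, ea_ev=0.7):
    """Median time to failure from Black's equation (arbitrary units via the prefactor a).

    current_density_a_cm2: wire current density J in A/cm^2
    temp_k: operating temperature in kelvin
    a, n, ea_ev: empirical prefactor, current exponent, activation energy (eV)
    """
    return a * current_density_a_cm2 ** -n * math.exp(ea_ev / (BOLTZMANN_EV * temp_k))

# A hotter, more heavily loaded net fails sooner, which is exactly the kind of
# early-design weak spot a reliability checker would flag.
cool = em_mttf(current_density_a_cm2=1e5, temp_k=350.0)
hot = em_mttf(current_density_a_cm2=2e5, temp_k=400.0)
print(hot < cool)
```

Doubling the current density or raising the junction temperature shrinks the predicted lifetime sharply, which is why catching such nets at the layout stage is cheaper than after fabrication.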

Read More: Microsoft Taps Canadian Start-up Mover.io to Ease Cloud File Migration to Microsoft 365

"We identified the EDA field, a modern domain used to design integrated circuits (ICs), that we believe can significantly benefit from our AI technology," stated Danny Rittman, GBT's CTO. "One of the major problems with today's advanced chips is their reliability. If a chip does not go through accurate electrical design for reliability, it can overheat, perform poorly or fail. We are now focused on enabling our analysis and auto-correction program, Epsilon, with the capability of predicting potential inner-chip nets that may overheat, cause poor performance or fail over time. Using our Avant! AI deep learning technology within Epsilon, the program will constantly monitor the chip's design as it evolves, alerting about potential risks. Furthermore, with user permission, Epsilon will be able to perform Auto-Correction for the at-risk signals, creating a Correct-By-Construction chip design environment. Avant! will perform an over-time analysis to predict how long failures may take to develop on critical nets. This will enable IC design houses to work more efficiently with customer budgets, knowing a chip's life span. Using Avant! AI for the IC reliability domain will ensure high-reliability, high-performance ICs, which are particularly crucial for areas like aviation, space exploration, military and medicine, where human lives depend on integrated circuits operating correctly."

Read More: AI Does Not Have to Be a Zero-Sum Game

Excerpt from:

GBT - AI Technology To Be Implemented Within Epsilon Program - AiThority

Posted in Ai | Comments Off on GBT – AI Technology To Be Implemented Within Epsilon Program – AiThority

AI Discovered Nazca Lines That’ve Been Lost for 2,000 Years – Popular Mechanics

Posted: at 2:46 pm

Atlantide PhototravelGetty Images

Japanese researchers have used machine learning and artificial intelligence to identify 143 new Nazca Lines, also called geoglyphs, in Peru. Among the numerous new glyphs was one discovered entirely by AI, reportedly a world first. The New York Times reports that the team of researchers used satellite photography, 3D imaging, and AI to find the ancient geoglyphs, which were impressed into the ground of a desert plain around 100 B.C. by the Nazca people.

A press release from Yamagata University explains that the team used IBM's Power Systems servers in an effort to understand the Nazca Lines as a whole. Researchers also utilized AI tech from the IBM Thomas J. Watson Research Center in New York to test whether or not AI could find new Nazca Lines.

And it worked. The team configured a machine-learning program with an AI model, which found a new glyph depicting a 16-foot-tall humanoid figure standing on two feet. To examine whether the AI could discover new Nazca Lines, the research team says it gave the system the capability to process large volumes of data, including high-resolution aerial images, at high speed.
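The approach described, scoring patches of high-resolution imagery for candidate glyphs, can be sketched at a very high level as a sliding-window search. The scoring function below is a stand-in stub for the trained model; the actual IBM/Yamagata pipeline is not public.

```python
def candidate_tiles(image, tile, stride, score_fn, threshold):
    """Return (row, col) corners of tiles whose glyph score meets the threshold."""
    rows, cols = len(image), len(image[0])
    hits = []
    for r in range(0, rows - tile + 1, stride):
        for c in range(0, cols - tile + 1, stride):
            patch = [row[c:c + tile] for row in image[r:r + tile]]
            if score_fn(patch) >= threshold:
                hits.append((r, c))
    return hits

def mean_brightness(patch):
    """Stub scorer: mean pixel value stands in for a classifier's glyph probability."""
    flat = [v for row in patch for v in row]
    return sum(flat) / len(flat)

# Tiny synthetic "aerial image" with one bright 4x4 region in the corner.
image = [[0] * 8 for _ in range(8)]
for r in range(4, 8):
    for c in range(4, 8):
        image[r][c] = 1

print(candidate_tiles(image, tile=4, stride=4,
                      score_fn=mean_brightness, threshold=0.5))
```

High-scoring tiles would then go to human experts for field verification, which is how the team confirmed the humanoid figure.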

Sakai et al.

The Nazca people lived near Peru's southern coast over 2,000 years ago. While the creation of the Nazca Lines has been the Nazca's claim to fame, this pre-Inca culture was also known for its pottery and textiles.

The Nazca Lines themselves have long prompted speculation about what exactly they are and what purpose they served. While some fictional works suggest that the Nazca Lines might have been landing strips for alien aircraft, archaeologists suggest that they were ritual sites, a form of irrigation, or some kind of ancient cosmic map.

In 1994, the Nazca Lines were designated as a UNESCO World Heritage site, but that hasn't stopped people from defacing them. In 2014, Greenpeace activists left their footprints near the hummingbird glyph as they unveiled a sign calling for the use of renewable resources. Then, in 2018, a trucker intentionally drove his rig off-road and over some of the lines, leaving behind three damaged "straight-line geoglyphs."

Masato Sakai, a professor of cultural anthropology who led the research effort, is concerned with the preservation of the Lines and hopes that these new discoveries will help protect them.

"The most important point is not the discovery itself... If [the lines] become clearly visible, they will be protected as important cultural heritages," says Sakai. The university says it will work with UNESCO and the Ministry of Culture in Peru to help preserve the newly discovered sites.

Source: The New York Times

Read the original:

AI Discovered Nazca Lines That've Been Lost for 2,000 Years - Popular Mechanics

Posted in Ai | Comments Off on AI Discovered Nazca Lines That’ve Been Lost for 2,000 Years – Popular Mechanics