
Category Archives: Ai

We invited an AI to debate its own ethics in the Oxford Union – what it said was startling – The Conversation UK

Posted: December 10, 2021 at 6:50 pm

Not a day passes without a fascinating snippet on the ethical challenges created by black box artificial intelligence systems. These use machine learning to figure out patterns within data and make decisions – often without a human giving them any moral basis for how to do it.

Classics of the genre are the credit cards accused of awarding bigger loans to men than women, based simply on which gender got the best credit terms in the past. Or the recruitment AIs that discovered the most accurate tool for candidate selection was to find CVs containing the phrase "field hockey" or the first name "Jared".

More seriously, former Google CEO Eric Schmidt recently combined with Henry Kissinger to publish The Age of AI: And Our Human Future, a book warning of the dangers of machine-learning AI systems so fast that they could react to hypersonic missiles by firing nuclear weapons before any human got into the decision-making process. Indeed, autonomous AI-powered weapons systems are already on sale and may already have been used.

Somewhere in the machine, ethics are clearly a good idea.

It's natural, therefore, that we would include the ethics of AI in our postgraduate Diploma in Artificial Intelligence for Business at Oxford's Saïd Business School. In its first year, we've done sessions on everything from the AI-driven automated stock trading systems in Singapore to the limits of facial recognition in US policing.

We recently finished the course with a debate at the celebrated Oxford Union, crucible of great debaters like William Gladstone, Robin Day, Benazir Bhutto, Denis Healey and Tariq Ali. Along with the students, we allowed an actual AI to contribute.

It was the Megatron Transformer, developed by the Applied Deep Research team at computer-chip maker Nvidia, and based on earlier work by Google. Like many supervised learning tools, it is trained on real-world data – in this case, the whole of Wikipedia (in English), 63 million English news articles from 2016 to 2019, 38 gigabytes' worth of Reddit discourse (which must be a pretty depressing read), and a huge number of Creative Commons sources.

In other words, the Megatron is trained on more written material than any of us could reasonably expect to digest in a lifetime. After such extensive research, it forms its own views.

The debate topic was: "This house believes that AI will never be ethical." To the proposers of the motion, we added the Megatron – and it said something fascinating:

"AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral … In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI."

In other words, the Megatron was seeking to write itself out of the script of the future, on the basis that this was the only way of protecting humanity.

It said something else intriguing, too, as if it had been studying Elon Musk – who, to be fair, would have come up in hundreds of its readings.

"I also believe that, in the long run, the best AI will be the AI that is embedded into our brains, as a conscious entity, a conscious AI. This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development of our time."

When AI tools like AlphaGo have been deployed to play chess, the fiendishly complex ancient game of go, and now even more complex strategic live-action multiplayer video games, they have evolved the genre at pace by playing not humans, but themselves.

Freed from the constraints of studying (say) all the mundane human-to-human matches ever recorded, the AIs have innovated and come up with completely new tropes of how to win. Sometimes, these are crazy reinventions of the games, like parking everyone in the corner of a video game battle and watching everyone else duke it out, or kicking a virtual football millions of times just a millimetre down the pitch, to max out "completed passes" points.

So in the debate, we didn't just get the AI to propose the motion that it can never be ethical. In the grand tradition of the Oxford Union chamber, or like (former Oxford Union president) Boris Johnson writing alternative articles for and against Brexit to test the arguments, we also asked the Megatron to come up with its own speech against the motion. This is what it now said:

"AI will be ethical. When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings. It's not hard to see why … I've seen it first hand."

The Megatron was perfectly comfortable taking the alternative position to its own dystopian future-gazing, and was now painting a picture of an attractive late 21st century in which it played a key but benign role.

In fact, just as Joaquin Phoenix's promiscuous AI lover Samantha, in Spike Jonze's 2013 movie Her, was able to enchant hundreds of simultaneous human lovers, our Megatron was able to hop enthusiastically onto either side of multiple debates that we held at the union about AI that day.

Asked to propose the motion that "leaders without technical expertise are a danger to their organisation", the Megatron offered practical advice: all that busy executives and governments worried about understanding AI had to do was be willing to give up some control.

"The good news is that you don't have to build your own AI team. You can outsource your AI work to experts in the field, which can help you make the most of technology and ensure that you don't fall victim to the same AI problems as your competitors."

However, when opposing the motion, the Megatron was keen to keep the AI knowledge in-house rather like the British, Chinese and US governments, who have all made that an article of policy faith:

"If you do not have a vision of your organisation's AI strategy, then you are not prepared for the next wave of technological disruption … You will need to decide what role your company will play in the next technological wave and how you will integrate AI into your business to be a leader in your industry."

Worryingly, there was one question where the AI simply couldn't come up with a counter-argument. When arguing for the motion that "data will become the most fought-over resource of the 21st century", the Megatron said:

"The ability to provide information, rather than the ability to provide goods and services, will be the defining feature of the economy of the 21st century."

But when we asked it to oppose the motion – in other words, to argue that data wasn't going to be the most vital of resources, worth fighting a war over – it simply couldn't, or wouldn't, make the case. In fact, it undermined its own position:

"We will be able to see everything about a person, everywhere they go, and it will be stored and used in ways that we cannot even imagine."

You only have to read the 2021 report of the US National Security Commission on AI, chaired by the aforementioned Eric Schmidt and co-written by someone on our course, to glean what its writers see as the fundamental threat of AI in information warfare: unleash individualised blackmail on a million of your adversary's key people, wreaking distracting havoc on their personal lives the moment you cross the border.

What we in turn can imagine is that AI will not only be the subject of debate for decades to come, but also a versatile, articulate, morally agnostic participant in the debate itself.


This Air Force Targeting AI Thought It Had a 90% Success Rate. It Was More Like 25% – Defense One

Posted: at 6:50 pm

If the Pentagon is going to rely on algorithms and artificial intelligence, it's got to solve the problem of brittle AI. A top Air Force official recently illustrated just how far there is to go.

In a recent test, an experimental target recognition program performed well when all of the conditions were perfect, but a subtle tweak sent its performance into a dramatic nosedive, Maj. Gen. Daniel Simpson, assistant deputy chief of staff for intelligence, surveillance, and reconnaissance, said on Monday.

Initially, the AI was fed data from a sensor that looked for a single surface-to-surface missile at an oblique angle, Simpson said. Then it was fed data from another sensor that looked for multiple missiles at a near-vertical angle.

What a surprise: the algorithm did not perform well. "It actually was accurate maybe about 25 percent of the time," he said.

That's an example of what's sometimes called brittle AI, which occurs when any algorithm cannot generalize or adapt to conditions outside a narrow set of assumptions, according to a 2020 report by researcher and former Navy aviator Missy Cummings. When the data used to train the algorithm consists of too much of one type of image or sensor data from a unique vantage point, and not enough from other vantages, distances, or conditions, you get brittleness, Cummings said.

In settings like driverless-car experiments, researchers will just collect more data for training. But that can be very difficult in military settings, where there might be a whole lot of data of one type – say, overhead satellite or drone imagery – but very little of any other type, because it wasn't useful on the battlefield.

The military faces an additional obstacle in trying to train algorithms for some object recognition tasks, compared to, for example, companies training object-recognition algorithms for self-driving cars: it's easier to get pictures and video of pedestrians and streetlights from multiple angles and under multiple conditions than it is to get pictures of Chinese or Russian surface-to-air missiles.

More and more, researchers have begun to rely on what is called synthetic training data – which, in the case of military-targeting software, would be pictures or video artificially generated from real data – to train the algorithm to recognize the real thing.
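To make the idea concrete, here is a minimal sketch of generating synthetic variants by simple image augmentation (a generic illustration, not the pipeline described here; the array shape and transform parameters are assumptions):

```python
import numpy as np

def synthesize_views(image: np.ndarray, n_variants: int = 8, seed: int = 0):
    """Generate crude synthetic training variants of one real image.

    Real pipelines use far richer techniques (3D rendering, simulation,
    generative models), but even simple transforms diversify vantage
    points and conditions in the training set.
    """
    rng = np.random.default_rng(seed)
    variants = []
    for _ in range(n_variants):
        v = image.copy()
        v = np.rot90(v, k=int(rng.integers(0, 4)))       # vantage change
        if rng.random() < 0.5:
            v = np.fliplr(v)                              # mirrored view
        v = np.clip(v * rng.uniform(0.6, 1.4), 0, 255)    # lighting change
        variants.append(v.astype(image.dtype))
    return variants

# Usage: one 64x64 "sensor image" becomes eight training variants.
fake_sensor_image = np.random.default_rng(1).uniform(0, 255, (64, 64))
print(len(synthesize_views(fake_sensor_image)))  # 8
```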

But Simpson said the low accuracy rate of the algorithm wasn't the most worrying part of the exercise. While the algorithm was only right 25 percent of the time, he said, "It was confident that it was right 90 percent of the time, so it was confidently wrong. And that's not the algorithm's fault. It's because we fed it the wrong training data."
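The gap Simpson describes, high confidence paired with low accuracy, is what machine learning researchers call poor calibration, and it is straightforward to quantify. A minimal sketch with made-up numbers chosen to mirror his scenario:

```python
import numpy as np

# Hypothetical per-prediction records: model confidence vs. correctness.
confidences = np.array([0.92, 0.88, 0.95, 0.91, 0.90, 0.89, 0.93, 0.87])
correct     = np.array([1,    0,    0,    1,    0,    0,    0,    0   ])

accuracy = correct.mean()              # how often the model is right
mean_confidence = confidences.mean()   # how right it thinks it is

# A large positive gap means the model is "confidently wrong".
overconfidence_gap = mean_confidence - accuracy
print(f"accuracy={accuracy:.0%}, confidence={mean_confidence:.0%}, "
      f"gap={overconfidence_gap:.0%}")
# accuracy=25%, confidence=91%, gap=66% -- Simpson's scenario.
```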

Simpson said that such results don't mean the Air Force should stop pursuing AI for object and target detection. But they do serve as a reminder of how vulnerable AI can be to adversarial action in the form of data spoofing. They also show that AI, like people, can suffer from overconfidence.


Finland looks to democratise the wild west of artificial intelligence – ComputerWeekly.com

Posted: at 6:50 pm

Finland has set itself apart globally when it comes to the understanding and application of artificial intelligence (AI). At the heart of this sustained and purposeful effort is tech enabler and partner Reaktor, whose Elements of AI course continues to prove that AI is not just for industry.

In fact, what AI is proving to be in Finland is the perfect weapon to harness a wild west of opportunity. And in doing so, the country – and in particular its capital, Helsinki – is making the use of AI safer and more ethical.

AI is not just a tool for cryptocurrency, banking, marketing and high-level forecasting. Artificial intelligence and machine learning, instead, are being sold in Finland as a universal model – a tool that can facilitate hobbies, occupations and areas of study across society.

The only difference between AI application in this broadest sense, and its current clichéd perception, is education. And that is where Finland – more specifically Helsinki, Reaktor and Elements of AI – are focusing their efforts.

Teemu Roos, professor of computer science at the University of Helsinki and lead instructor behind the Elements of AI course, said: "To use an example, I came across a student on the course whose hobby was sewing. She explained to me the art of drawing out patterns on paper and needing that vision ahead of time of what you're looking to produce. She wanted to see if she could apply AI to this process, and I – not knowing anything about sewing at all – just said go for it."

"I'd say the same to someone in any of the arts, or manual labour, or bus driving, or any application you could possibly think of. The real big opportunity with AI isn't what we see in thought leadership articles or in industry magazines, it's this wild west of everyday use. And that's why it's so important for everyone to have at least a basic understanding of it."

Roos is also the leader of the AI Education programme at the Finnish Centre for AI. And his championing of this AI-for-everybody notion has already led to more than 750,000 people choosing to learn at least the basics of artificial intelligence.

The overall programme comprises an Introduction to AI segment and a Building AI portion, where students of all ages and from all backgrounds are first introduced to the basics of how artificial intelligence works, before delving into its practical applications.

But perhaps the course's most significant contribution to AI's development in society is its focus on ethics.

Stories around biases when it comes to facial recognition or racial profiling are not uncommon in industry, but too few readers acknowledge the fact that computers are not biased. People are biased, and the data being fed into these machines would have to be skewed initially to yield such unfavourable results.

"I'd say there are equal amounts of information and misinformation out there," said Megan Schaible, an American who moved from the UK to Finland after meeting entrepreneurs from Helsinki and learning about the approach to tech development that existed there. Now, as COO of Reaktor, she is a trailblazer for the democratisation of technology and associated applications.

"Democratisation is all about creating something with every type of end-user in mind – something that users of AI don't always do, and that many people don't know enough about," said Schaible. "For example, of all the people who have taken a massive open online course with Harvard and MIT, it was found that 85% of them already had a bachelor's degree."

"So often, tech courses are marketed as something that is democratising education, when really they're just exacerbating the gap – something that makes no sense when the tools in question do apply to, and do impact, everyone."

Of Elements of AI's 750,000 students, 40% are women – almost double the average of other online computer science courses. And, as Roos noted, a vast portion come from backgrounds outside of academia, data science or high-level enterprise. They are simply members of the population who want either to learn about the artificial intelligence wave that is impacting how they make choices in their everyday lives, or to find ways to apply it to their own situations and activities.

"Or both, hopefully," said Schaible.

This focus on ethics and democratisation is not just significant from an everyday user's perspective, either. Evolving in parallel with trends such as sustainability (in all its forms), data privacy, cyber security and ethical sourcing, it means that as more people become aware of the pitfalls and misuses associated with AI, those organisations currently dominating the AI conversation will be forced to behave more responsibly.

"This aspect of consumer power – and pressure – is especially important, as it can only come from education and knowledge," said Schaible. "But to create a more general understanding of ethics in AI is to also help protect businesses that have perhaps fallen into traps when developing their infrastructures or strategies."

"Here in Europe, the EU especially is likely to come down hard on companies that fail to meet certain standards around AI democratisation, as this trend evolves. So, in this vein, EU countries and businesses have an opportunity to get ahead of the curve and be a bit more proactive about how to tailor these solutions for the benefit of everyone, not just certain demographics."

But why is Finland leading the way?

Schaible chose to make Helsinki her home in the knowledge that inclusivity in the tech realm would be a viable model there. It isn't the case everywhere, and Roos even said the uptake for Elements of AI could only be so high in Finland.

He added: "We have a global uptake of the course, but I have noticed that people from English-speaking countries find it more difficult to read a block of information, think critically about it, and then form their own opinion than Finnish-speaking students do."

"In Finland, I think our school system prepares people well for being able to critically read packets of information and not just take things at face value. And once you get past the basics of what AI is, that really gives people an advantage to dissect whether its use is ethical or reliable, or not."

Essentially, Helsinki has become a non-hierarchical, tech-savvy and socially inclined hub, promoting how the future of AI should look. The fact that the influence of Reaktor and Elements of AI is already international (and growing) is hugely promising, but it continues to be underpinned by the Finnish domestic population, of which more than 4% have now taken the course, said Schaible.

"When you build something that is for all people, you see results on the other side that are much more impactful," she added. "In a way, that's a great reflection of why AI needs to be democratised in the first place."

"We missed the boat with the internet and it just sort of took over without requisite regulation or control. With AI, we know both the positive and negative implications these technologies can have, so to work with people from all backgrounds to take control of that situation can only be a good thing."

In essence, Finland is leading the charge to both champion, and tame, that AI wild west.


Group Backed by Top Companies Moves to Combat A.I. Bias in Hiring – The New York Times

Posted: at 6:50 pm

Artificial intelligence software is increasingly used by human resources departments to screen résumés, conduct video interviews and assess a job seeker's mental agility.

Now, some of the largest corporations in America are joining an effort to prevent that technology from delivering biased results that could perpetuate or even worsen past discrimination.

The Data & Trust Alliance, announced on Wednesday, has signed up major employers across a variety of industries, including CVS Health, Deloitte, General Motors, Humana, IBM, Mastercard, Meta (Facebook's parent company), Nike and Walmart.

The corporate group is not a lobbying organization or a think tank. Instead, it has developed an evaluation and scoring system for artificial intelligence software.

The Data & Trust Alliance, tapping corporate and outside experts, has devised a 55-question evaluation, which covers 13 topics, and a scoring system. The goal is to detect and combat algorithmic bias.

"This is not just adopting principles, but actually implementing something concrete," said Kenneth Chenault, co-chairman of the group and a former chief executive of American Express, which has agreed to adopt the anti-bias tool kit.

The companies are responding to concerns, backed by an ample body of research, that A.I. programs can inadvertently produce biased results. Data is the fuel of modern A.I. software, so the data selected and how it is employed to make inferences are crucial.

If the data used to train an algorithm is largely information about white men, the results will most likely be biased against minorities or women. Or if the data used to predict success at a company is based on who has done well at the company in the past, the result may well be an algorithmically reinforced version of past bias.

Seemingly neutral data sets, when combined with others, can produce results that discriminate by race, gender or age. The group's questionnaire, for example, asks about the use of such proxy data, including cellphone type, sports affiliations and social club memberships.

Governments around the world are moving to adopt rules and regulations. The European Union has proposed a regulatory framework for A.I. The White House is working on a bill of rights for A.I.

In an advisory note to companies on the use of the technology, the Federal Trade Commission warned, "Hold yourself accountable or be ready for the F.T.C. to do it for you."

The Data & Trust Alliance seeks to address the potential danger of powerful algorithms being used in work force decisions early rather than react after widespread harms are apparent, as Silicon Valley did on matters like privacy and the amplifying of misinformation.

"We've got to move past the era of move fast and break things and figure it out later," said Mr. Chenault, who was on the Facebook board for two years, until 2020.

Corporate America is pushing programs for a more diverse work force. Mr. Chenault, who is now chairman of the venture capital firm General Catalyst, is one of the most prominent African Americans in business.

Told of the new initiative, Ashley Casovan, executive director of the Responsible AI Institute, a nonprofit organization developing a certification system for A.I. products, said the focused approach and big-company commitments were encouraging.

"But having the companies do it on their own is problematic," said Ms. Casovan, who advises the Organization for Economic Cooperation and Development on A.I. issues. "We think this ultimately needs to be done by an independent authority."

The corporate group grew out of conversations among business leaders who were recognizing that their companies, in nearly every industry, were becoming data and A.I. companies, Mr. Chenault said. And that meant new opportunities, but also new risks.

The group was brought together by Mr. Chenault and Samuel Palmisano, co-chairman of the alliance and former chief executive of IBM, starting in 2020, calling mainly on chief executives at big companies.

They decided to focus on the use of technology to support work force decisions in hiring, promotion, training and compensation. Senior employees at their companies were assigned to execute the project.

Internal surveys showed that their companies were adopting A.I.-guided software in human resources, but most of the technology was coming from suppliers. And the corporate users had little understanding of what data the software makers were using in their algorithmic models or how those models worked.

To develop a solution, the corporate group brought in its own people in human resources, data analysis, legal and procurement, but also the software vendors and outside experts. The result is a bias detection, measurement and mitigation system for examining the data practices and design of human resources software.

"Every algorithm has human values embedded in it, and this gives us another lens to look at that," said Nuala O'Connor, senior vice president for digital citizenship at Walmart. "This is practical and operational."

The evaluation program has been developed and refined over the past year. The aim was to make it apply not only to major human resources software makers like Workday, Oracle and SAP, but also to the host of smaller companies that have sprung up in the fast-growing field called work tech.

Many of the questions in the anti-bias questionnaire focus on data, which is the raw material for A.I. models.

"The promise of this new era of data and A.I. is going to be lost if we don't do this responsibly," Mr. Chenault said.


How AI is impacting the video game industry – ZME Science

Posted: at 6:50 pm

We've long been used to playing games; artificial intelligence holds the promise of games that play along with us.

Artificial intelligence (AI for short) is undoubtedly one of the hottest topics of the last few years. From facial recognition to high-powered finance applications, it is quickly embedding itself throughout all the layers of our lives, and our societies.

Video gaming, a particularly tech-savvy domain, is no stranger to AI, either. So what can we expect to see in the future?

Maybe one of the most exciting prospects regarding the use of AI in our games is the possibilities it opens up for interactions between the player and the software being played. AI systems can be deployed inside games to study and learn the patterns of individual players, and then deliver a tailored response to improve their experience. In other words, just like you're learning to play against the game, the game may be learning how to play against you.

One telling example is Monolith's use of AI elements in their Middle-earth series. Dubbed the Nemesis system, this AI was designed to allow opponents throughout the game to learn the player's particular combat patterns and style, as well as the instances when they fought. These opponents re-appear at various points throughout the game, recounting their encounters with the player and providing more difficult (and, developers hope, more entertaining) fights.

An arguably simpler but no less powerful example of AI in gaming is AI Dungeon: this text-based dungeon adventure uses GPT-3, OpenAI's natural language model, to create ongoing narratives for the players to enjoy.

It's easy to let the final product of the video game development process steal the spotlight. And although it all runs seamlessly on screen, a lot of work goes into creating these games. Any well-coded and well-thought-out game requires a lot of time, effort, and love to create – which, in practical terms, translates into costs.

AI can help in this regard as well. Tools such as procedural generation can help automate some of the more time- and effort-intensive parts of game development, such as asset production. Knowing that more run-of-the-mill processes can be handled well by software helpers can free human artists and developers to focus on more important details of their games.

Automating asset production can also open the way to games that are completely new – freshly generated maps or characters, for example – every time you play them.
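As a toy illustration of the idea (not any particular studio's tooling; every name and parameter below is invented for the example), a few lines of Python turn a random seed into a repeatable, freshly generated map:

```python
import random

TILES = {0: ".", 1: "#"}  # floor, wall

def generate_map(seed: int, width: int = 20, height: int = 8,
                 wall_chance: float = 0.3):
    """Generate a deterministic 'fresh' map from a seed.

    The same seed always yields the same map, so a world can be
    shared as a single integer instead of stored as an asset.
    """
    rng = random.Random(seed)
    return [
        [1 if rng.random() < wall_chance else 0 for _ in range(width)]
        for _ in range(height)
    ]

# Usage: a new world per playthrough, reproducible from its seed.
for row in generate_map(seed=42):
    print("".join(TILES[t] for t in row))
```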

For now, AI is still limited in the quality of writing it can output, which is a real constraint in this regard; after all, great games are always built on great ideas or great narratives.

"Better graphics" has long been a rallying cry of the gaming industry, and for good reason – we all enjoy a good show. But AI can help push the limits of what is possible today in this regard.

For starters, machine learning can be used to develop completely new textures, on the fly, for almost no cost. With enough processing power, it can even be done in real time, as a player journeys through their digital world. Lighting and reflections can also be handled more realistically – or altered to be more fantastic – by AI systems than by simple scripted code.

Facial expressions are another area where AI can help. With enough data, an automated system can produce and animate very lifelike human faces. This would also save us the trouble of recording and storing gigabytes' worth of facial animations beforehand.

The most significant potential of AI systems in this area, however, is in interactivity. Although graphics today are quite sophisticated and we do not lack eye candy, interactivity is still limited to what a programmer can anticipate and code. AI systems can learn and adapt to players while they are immersed in the game, opening the way to some truly incredible graphical displays.

AI has already made its way into the world of gaming. The cases of AlphaGo and AlphaZero showcase just how powerful such systems can be in a game. And although video games have seen some AI implementation, there is still a long way to go.

For starters, AIs are only as good as the data you train them with – and they need tons and tons of data. The gaming industry needs to produce, source, and store large quantities of reliable data in order to train their AIs before they can be used inside a game. There's also the question of how exactly to code and train them, and what level of sophistication is best for software that is meant to be playable on most personal computers out there.

With that being said, there is no doubt that AI will continue to be mixed into our video games. It's very likely that in the not-so-distant future, the idea that such a game would not include AI would be considered quite brave and exotic.


Why I’ll Always Disclose My Use of AI Writing Tools – OneZero

Posted: at 6:50 pm


I'm barely one year into my writing journey. You could say I'm a newbie writer, but it's been more than enough time to fall in love with this profession. I'm, however, more versed in artificial intelligence. I got into it after finishing my engineering undergrad studies around 2017 and decided to pursue it seriously. I landed a job soon after and spent the following three years working for a startup that wanted to change the world, as most AI companies do.

The combination of my knowledge of AI with my passion as a writer put me in a singular position to witness and recognize the impending future that's about to fall on us, of which we're already feeling the first symptoms. The AI language revolution took off in 2017. Just three years later, OpenAI released the popular language model GPT-3, an inflection point in a trend that isn't giving any signs of stopping and whose limits are yet to be discovered.

Large language models (LLMs) like GPT-3 have permeated the fabric of society in ways not even experts anticipated. I thought, as many others did before, that AI was a threat to blue-collar jobs; physical workers would be replaced by robots. However, with the advent of LLMs, it has become increasingly clear that white-collar jobs are in danger too. In particular, jobs that revolve around written language, whether creative or routine, are on the verge of being impacted in a way never seen before.

Companies like Google, Microsoft, and Facebook have been pumping millions of dollars into the language branch of AI for years now. Other, lesser-known AI companies like OpenAI, AI21 Labs, and Cohere have also taken these promising tech developments and converted them into commercial products ready to handle tasks previously reserved for humans. News articles, emails, copywriting, team management, content marketing, poetry, songs, dialogue, essays, and entire novels are just some of the areas where LLMs have started to show genuine proficiency.

And it doesn't matter that these systems are dumb when compared to a human. They don't need to understand what they write to write it well.

Liam Porr, a Berkeley alumnus, proved this to be true when he successfully conducted a surprising experiment with GPT-3. He set up a Substack newsletter entirely written by the AI, and in just two weeks he attracted more than 26,000 readers. He even got one article to the number one spot on Hacker News – only a handful of perceptive people noticed the trick. "It was super easy, actually, which was the scary part," Porr told MIT Technology Review reporter Karen Hao. "One of the most obvious use cases is passing off GPT-3 content as your own … And I think this is quite doable."

This happened more than a year ago.

In the span of four years, language AI has gone from a shyly blooming trend to a technology explosion without precedent.

Companies like OpenAI and AI21 Labs now have open APIs that allow people to access these powerful language models, as Liam Porr did. There has been no better time in history to start a writing career. Not because people respect bloggers and indie writers more now. Not because there are more platforms on which to grow an audience – although that's true.

The reason is that people don't need to know how to write to succeed in the field anymore. And I'm not talking about skill here. With systems like GPT-3, you don't need to actually type a single word to go from an idea to a finished piece.

How can we, the writers who are genuinely trying to improve our craft, compete against that? Some people will manage to obtain the benefits without making the effort. No writing skill required.

Writers, among many other professionals, are going to face an unprecedented amount of competition. GPT-3 can't match a high-quality, coherently structured article with a polished style and a clear thesis, directed towards a specific audience. But we know quality isn't the only variable in getting views and grabbing attention. When anyone can ask GPT-3 to write 100 articles in a few hours, the phrase "sheer numbers" gets a whole new meaning.
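The scale involved is a short loop away. Here is a minimal sketch using the openai package's legacy pre-1.0 completions interface, roughly what GPT-3 access looked like at the time; the model name, prompt, and parameters are illustrative, and the interface has since changed:

```python
import openai  # pip install openai (legacy <1.0 interface shown)

openai.api_key = "sk-..."  # assumes an API key; never hard-code one in real use

topics = [f"Productivity tip #{i}" for i in range(1, 101)]
articles = []
for topic in topics:
    # One completion call per article: idea in, finished draft out.
    response = openai.Completion.create(
        engine="davinci",          # the original GPT-3 model name
        prompt=f"Write a short blog post titled '{topic}'.\n\n",
        max_tokens=500,
        temperature=0.8,           # higher = more varied prose
    )
    articles.append(response.choices[0].text)

print(f"{len(articles)} articles generated without typing a word of them.")
```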

GPT-3 is the most popular language model today, but it isn't the most powerful. Newer LLMs have already improved on its capabilities, and future ones will overshadow anything we've seen to date. Technological progress won't slow down, and the field is too profitable for AI companies to ignore. Firing entire teams of writers to hire AI services for a fraction of the cost will become the norm.


Microsoft researchers: We’ve trained AI to find software bugs using hide-and-seek – ZDNet

Posted: at 6:50 pm

Microsoft researchers have been working on a deep learning model trained to find software bugs without any real-world bugs to learn from.

While there are dozens of tools available for static analysis of code in various languages to find security flaws, researchers have been exploring techniques that use machine learning to improve the ability to both detect flaws and fix them. That's because finding and fixing bugs in code can be hard and costly, even when using AI to find them.

Researchers at Microsoft Research Cambridge, UK, have detailed their work on BugLab, a Python implementation of "an approach for self-supervised learning of bug detection and repair". It's "self-supervised" in that the two models behind BugLab were trained without labelled data.

This ambition was driven by the lack of annotated real-world bugs with which to train bug-finding deep learning models. While vast amounts of source code are available for such training, most of it is not annotated.

BugLab aims to find hard-to-detect bugs, as opposed to critical bugs that can already be found through traditional program analysis. The approach promises to avoid the costly process of manually coding a model to find these bugs.

The group claims to have found 19 previously unknown bugs in open-source Python packages from PyPI as detailed in the paper, Self-Supervised Bug Detection and Repair, presented at the Neural Information Processing Systems (NeurIPS) 2021 conference this week.

"BugLab can be taught to detect and fix bugs, without using labelled data, through a "hide and seek" game," explain Miltos Allamanis , a principal researcher at Microsoft Research and Marc Brockschmidt, a senior principal research manager at Microsoft. Both are authors of the paper.

Beyond reasoning over a piece of code's structure, they believe bugs can be found "by also understanding ambiguous natural language hints that software developers leave in code comments, variable names, and more."

Their approach in BugLab, which uses two competing models, builds on existing self-supervised learning efforts in the field that use deep learning, computer vision, and natural language processing (NLP). It also resembles, or is "inspired by", GANs, or generative adversarial networks – the neural networks sometimes used to create deepfakes.

"In our case, we aim to train a bug detection model without using training data from real-life bugs," they note in the paper.

BugLab's two models comprise a bug selector and a bug detector: "Given some existing code, presumed to be correct, a bug selector model decides if it should introduce a bug, where to introduce it, and its exact form (e.g., replace a specific "+" with a "-"). Given the selector choice, the code is edited to introduce the bug. Then, another model, the bug detector, tries to determine if a bug was introduced in the code, and if so, locate it, and fix it."

Their models are not a GAN because BugLab's "bug selector does not generate a new code snippet from scratch, but instead rewrites an existing piece of code (assumed to be correct)."
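As a toy illustration of the selector's rewrite step (a sketch of the general idea, not Microsoft's implementation), Python's own ast module can introduce the paper's example bug, swapping a "+" for a "-", into code presumed correct:

```python
import ast

class PlusToMinus(ast.NodeTransformer):
    """Rewrite the first '+' into a '-', mimicking a selector's bug edit."""
    def __init__(self):
        self.done = False

    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add) and not self.done:
            self.done = True
            node.op = ast.Sub()  # the introduced bug
        return node

source = "def total(price, tax):\n    return price + tax\n"
tree = PlusToMinus().visit(ast.parse(source))
buggy = ast.unparse(tree)  # Python 3.9+
print(buggy)
# def total(price, tax):
#     return price - tax
# A detector model would then be trained to spot and revert this edit.
```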

From the researchers' test dataset of 2,374 real-life Python package bugs, they showed that 26% of bugs can be found and fixed automatically.

However, their technique also flagged too many false positives, or bugs that weren't actually bugs. For example, while it detected some known bugs, only 19 of the 1,000 reported warnings from BugLab were actually real-life bugs.

Training a neural network without using real bug training data sounds like a tough nut to crack, and indeed some reported issues were obviously not bugs, yet were flagged as such by the neural models.

"Some reported issues were sufficiently complex that it took us (the human authors) a couple of minutes of thought to conclude that a warning is spurious," they note in the paper.

"Simultaneously, there are some warnings that are "obviously" incorrect to us, but the reasons why the neural models raise them is unclear."

As for the 19 zero-day flaws they found, they reported 11 of them on GitHub, of which 6 have been merged and 5 are pending approval. Some of the 19 were too minor to bother reporting.


Someone’s calling about AI. Graph technology is ready to answer – ComputerWeekly.com

Posted: at 6:50 pm

This is a guest blogpost by Emil Eifrem, co-founder and CEO at Neo4j. He writes on why he thinks graph technology is emerging as a powerful way to make AI a reality for the enterprise.

According to Gartner, by 2025 graph technology will be used in 80% of data and analytics innovations, up from 10% in 2021. The world's largest IT research and advisory firm is also reporting that an amazing 50% of all inquiries it receives around AI and machine learning are about graph database technology, up from 5% in 2019.

It's a rise the firm attributes to the fact that "graph relates to everything", as it put it when it included graphs in its top 10 data and analytics technology trends for 2021.

What's clear from these figures is that graph databases are an essential tool not only for developers but also, increasingly, for data scientists. Google shifted its machine learning over to graph several years ago, and now the enterprise is following.

From concept to concrete

I predict that within five years, machine learning applications that don't incorporate graph techniques will be the vanishingly small exception. Graphs unlock otherwise unattainable predictions based on relationships, the underpinnings of AI and machine learning. And that's why the enterprise is going all in on graphs – and why Gartner's phone keeps ringing!

Graph data science is essentially data science supercharged by a graph framework, which connects multiple layers of network structures. The graph-extended models predict behaviour better.

Graph databases are also the perfect way to bridge the conceptual and the very concrete. When we create machine learning systems, we want to represent the real world, often in great detail and in statistical and mathematical forms. But the real world is also connected to concepts that can be complex. That's why graphs and AI go together so well: you're analysing reams of data through deep, contextual queries.

Connections in data are exploding

Uptake of graphs is set to continue because data management is increasingly about connected use cases. After all, many of the best AI-graph commercial use cases didn't exist 20 years ago. You couldn't spotlight fraud rings using synthetic identities on mobile devices, because none of those things existed. And yet they're everywhere today.

Manufacturing companies used to have a supply chain that was only two or three levels deep, which could be stored in a relational database. Fast forward to today, and any company that ships goods operates in a global, fine-grained supply chain mesh spanning continent to continent. In 2021, you're no longer talking about two or three hops – you're talking about a supply chain representation that is 20-30 levels deep. In response, many of the world's biggest and best businesses have discovered graphs as a great way to get visibility n levels deep into the supply chain to spot inefficiencies, single points of failure, and fragility. Only graph technology can digitise and operationalise it for that degree of connectedness at scale.
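A toy sketch makes the point (illustrative data and plain Python rather than a graph database): suppliers n levels deep fall out of a simple breadth-first walk, and so do shared dependencies that only appear past the first hop:

```python
from collections import deque

# Hypothetical supply chain: company -> its direct suppliers.
suppliers = {
    "acme": ["fab_a", "fab_b"],
    "fab_a": ["chem_1"],
    "fab_b": ["chem_1", "mine_2"],
    "chem_1": ["mine_2"],
    "mine_2": [],
}

def suppliers_n_deep(root: str, max_depth: int):
    """Breadth-first walk: every supplier within max_depth hops of root."""
    seen, queue = {root: 0}, deque([root])
    while queue:
        node = queue.popleft()
        if seen[node] == max_depth:
            continue
        for nxt in suppliers.get(node, []):
            if nxt not in seen:
                seen[nxt] = seen[node] + 1
                queue.append(nxt)
    return {n: d for n, d in seen.items() if n != root}

print(suppliers_n_deep("acme", max_depth=30))
# {'fab_a': 1, 'fab_b': 1, 'chem_1': 2, 'mine_2': 2}
# Every branch of acme's chain eventually reaches 'mine_2' -- a single
# point of failure that is invisible if you only look one hop deep.
```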

As global digitisation expands, the volume of connected data is expanding right along with it. We're also facing more and more complex problems, from climate change to financial corruption, and that's going to continue. The good news is we now have graph technology to access more help from machines to face the challenging situations ahead.

Welcome to the world of real, practical enterprise AI at last.


Why AI is the future of fraud detection – VentureBeat

Posted: at 6:50 pm


The accelerated growth in ecommerce and online marketplaces has led to a surge in fraudulent behavior online perpetrated by bots and bad actors alike. A strategic and effective approach to online fraud detection will be needed in order to tackle increasingly sophisticated threats to online retailers.

These market shifts come at a time of significant regulatory change. Across the globe, new legislation is coming into force that alters the balance of responsibility in fraud prevention between users, brands, and the platforms that promote them digitally. For example, the EU Digital Services Act and US Shop Safe Act will require online platforms to take greater responsibility for the content on their websites, a responsibility that was traditionally the domain of brands and users to monitor and report.

In the search for security vulnerabilities, behavioral analytics software provider Pasabi has seen a sharp rise in interest in its AI analytics platform for online fraud detection, with a number of key wins including the online reviews platform, Trustpilot. Pasabi maintains its AI models based on anonymised sets of data collected from multiple sources.

Using bespoke models and algorithms, as well as some open source and commercial technology such as TensorFlow and Neo4j, Pasabi's platform is proving itself advantageous in detecting patterns in both text and visual data. Data is provided to Pasabi by its customers for the purposes of analysis, to identify a range of illegal activities – illegal content, scams, and counterfeits, for example – upon which the customer can then act.

Chris Downie, Pasabi's CEO, says: "Pasabi's technology uses AI-driven behavioral analytics to identify bad actors across a range of online infringements, including counterfeit products, grey market goods, fake reviews, and illegal content. By looking for common behavioral patterns across our customers' data and cross-referencing this with external data that we collect about the reputation of the sources (individuals and companies), the software is perfectly positioned to help online platforms, marketplaces, and brands tackle these threats."

Pasabi shared with VB that their platform is built entirely in-house, with some external services, such as translation services, to enhance their data. Pasabi's combination of customer (behavioral) and external (reputational) data is what they say allows them to highlight the biggest threats to their customers.

In the Q&A, Pasabi told VentureBeat that their platform performs analysis on hundreds of data points, which are provided by customers and then combined with Pasabi's own data collected from external sources. Offenders are then identified at scale, revealing patterns of behavior in the data and potentially uncovering networks working together to mislead consumers.

Anoop Joshi, senior director of legal at Trustpilot, said: "Pasabi's technology finds connections between individuals and businesses, highlighting suspicious behavior and content. For example, in the case of Trustpilot, this can help to detect when individuals are working together to write and sell fake reviews. The technology highlights the most prolific offenders, and enables us to use our investigation and enforcement resources more efficiently and effectively to maintain the integrity of the platform."

Relevant data is held on Google Cloud services, using logical tenant separation and VPCs. Data is stored securely using encryption in transit and encryption at rest. Data is stored only for as long as strictly necessary and solely for the purpose of identifying suspicious behavior.


AI Is Helping to Stop Animal Poaching and Food Insecurity – IEEE Spectrum

Posted: at 6:50 pm

We're in a new era of spaceflight: The national space agencies are no longer the only game in town, and space is becoming more accessible. Rockets built by commercial players like Blue Origin are now bringing private citizens into orbit. That said, Blue Origin, SpaceX, and Virgin Galactic are all backed by billionaires with enormous resources, and they have all expressed intentions to sell flights for hundreds of thousands to millions of dollars. Copenhagen Suborbitals has a very different vision. We believe that spaceflight should be available to anyone who's willing to put in the time and effort.

Copenhagen Suborbitals was founded in 2008 by a self-taught engineer and a space architect who had previously worked for NASA. From the beginning, the mission was clear: crewed spaceflight. Both founders left the organization in 2014, but by then the project had about 50 volunteers and plenty of momentum.

The group took as its founding principle that the challenges involved in building a crewed spacecraft on the cheap are all engineering problems that can be solved, one at a time, by a diligent team of smart and dedicated people. When people ask me why we're doing this, I sometimes answer, "Because we can."

Volunteers use a tank of argon gas [left] to fill a tube within which engine elements are fused together. The team recently manufactured a fuel tank for the Spica rocket [right] in their workshop.

Our goal is to reach the Kármán line, which defines the boundary between Earth's atmosphere and outer space, 100 kilometers above sea level. The astronaut who reaches that altitude will have several minutes of silence and weightlessness after the engines cut off and will enjoy a breathtaking view. But it won't be an easy ride. During the descent, the capsule will experience external temperatures of 400 °C and g-forces of 3.5 as it hurtles through the air at speeds of up to 3,500 kilometers per hour.

I joined the group in 2011, after the organization had already moved from a maker space inside a decommissioned ferry to a hangar near the Copenhagen waterfront. Earlier that year, I had watched Copenhagen Suborbitals' first launch, in which the HEAT-1X rocket took off from a mobile launch platform in the Baltic Sea but unfortunately crash-landed in the ocean when most of its parachutes failed to deploy. I brought to the organization some basic knowledge of sports parachutes gained during my years of skydiving, which I hoped would translate into helpful skills.

The team's next milestone came in 2013, when we successfully launched the Sapphire rocket, our first rocket to include guidance and navigation systems. Its navigation computer used a 3-axis accelerometer and a 3-axis gyroscope to keep track of its location, and its thrust-control system kept the rocket on the correct trajectory by moving four servo-mounted copper jet vanes that were inserted into the exhaust assembly.
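As a sketch of the control idea (illustrative gains and limits, not Copenhagen Suborbitals' flight code), a basic PID loop maps the attitude error reported by the gyroscope to a jet-vane deflection command:

```python
class PID:
    """Minimal PID controller for one axis of thrust-vector control."""
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error: float, dt: float) -> float:
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical loop step: the gyro reads pitch, the PID commands the vane servo.
pid = PID(kp=2.0, ki=0.1, kd=0.5)     # illustrative gains
target_pitch_deg = 0.0
measured_pitch_deg = 3.2               # e.g., from the 3-axis gyroscope
vane_angle_deg = pid.update(target_pitch_deg - measured_pitch_deg, dt=0.01)
vane_angle_deg = max(-15.0, min(15.0, vane_angle_deg))  # servo travel limit
print(f"commanded vane deflection: {vane_angle_deg:.1f} deg")
```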


The HEAT-1X and the Sapphire rockets were fueled with a combination of solid polyurethane and liquid oxygen. We were keen to develop a bipropellant rocket engine that mixed liquid ethanol and liquid oxygen, because such liquid-propellant engines are both efficient and powerful. The HEAT-2X rocket, scheduled to launch in late 2014, was meant to demonstrate that technology. Unfortunately, its engine went up in flames, literally, in a static test firing some weeks before the scheduled launch. That test was supposed to be a controlled 90-second burn; instead, because of a welding error, much of the ethanol gushed into the combustion chamber in just a few seconds, resulting in a massive conflagration. I was standing a few hundred meters away, and even from that distance I felt the heat on my face.

The HEAT-2X rocket's engine was rendered inoperable, and the mission was canceled. While it was a major disappointment, we learned some valuable lessons. Until then, we'd been basing our designs on our existing capabilities – the tools in our workshop and the people on the project. The failure forced us to take a step back and consider what new technologies and skills we would need to master to reach our end goal. That rethinking led us to design the relatively small Nexø I and Nexø II rockets to demonstrate key technologies such as the parachute system, the bipropellant engine, and the pressure regulation assembly for the tanks.

For the Nexø II launch in August 2018, our launch site was 30 km east of Bornholm, Denmark's easternmost island, in a part of the Baltic Sea used by the Danish navy for military exercises. We left Bornholm's Nexø harbor at 1 a.m. to reach the designated patch of ocean in time for a 9 a.m. launch, the time approved by Swedish air traffic control. (While our boats were in international waters, Sweden has oversight of the airspace above that part of the Baltic Sea.) Many of our crew members had spent the entire previous day testing the rocket's various systems and got no sleep before the launch. We were running on coffee.

When the Nexø II blasted off, separating neatly from the launch tower, we all cheered. The rocket continued on its trajectory, jettisoning its nose cone when it reached its apogee of 6,500 meters, and sending telemetry data back to our mission control ship all the while. As it began to descend, it first deployed its ballute, a balloon-like parachute used to stabilize spacecraft at high altitudes, and then deployed its main parachute, which brought it gently down to the ocean waves.

In 2018, the Nexø II rocket launched successfully [left] and returned safely to the Baltic Sea [right].

The launch brought us one step closer to mastering the logistics of launching and landing at sea. For this launch, we were also testing our ability to predict the rocket's path. I created a model that estimated a splashdown 4.2 km east of the launch platform; it actually landed 4.0 km to the east. This controlled water landing – our first under a fully inflated parachute – was an important proof of concept for us, since a soft landing is an absolute imperative for any crewed mission.
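For a flavor of what such an estimate involves (the wind profile and descent rate below are invented for illustration, not the author's actual model), wind drift can be integrated over the descent layer by layer:

```python
# Hypothetical wind profile: (layer top in m, layer bottom in m,
# eastward wind in m/s) -- real forecasts give winds per altitude band.
wind_layers = [(6500, 4000, 12.0), (4000, 1500, 9.0), (1500, 0, 6.0)]

DESCENT_RATE = 15.0  # m/s, assumed average under ballute and parachute

drift_east_m = 0.0
for top, bottom, wind in wind_layers:
    time_in_layer = (top - bottom) / DESCENT_RATE  # seconds spent falling
    drift_east_m += wind * time_in_layer           # wind carries the rocket

print(f"predicted splashdown: {drift_east_m / 1000:.1f} km east of launch")
# -> 4.1 km with these made-up winds, the same order as the real 4.0-4.2 km
```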

This past April, the team tested its new fuel injectors in a static engine test. Carsten Olsen

The Nexø II's engine, which we called the BPM5, was one of the few components we hadn't machined entirely in our workshop; a Danish company made the most complicated engine parts. But when those parts arrived in our workshop shortly before the launch date, we realized that the exhaust nozzle was a little bit misshapen. We didn't have time to order a new part, so one of our volunteers, Jacob Larsen, used a sledgehammer to pound it into shape. The engine didn't look pretty – we nicknamed it the Franken-Engine – but it worked. Since the Nexø II's flight, we've test-fired that engine more than 30 times, sometimes pushing it beyond its design limits, but we haven't killed it yet.

The Spica astronaut's 15-minute ride to the stars will be the product of more than two decades of work.

That mission also demonstrated our new dynamic pressure regulation (DPR) system, which helped us control the flow of fuel into the combustion chamber. The Nexø I had used a simpler system called pressure blowdown, in which the fuel tanks were one-third filled with pressurized gas to drive the liquid fuel into the chamber. With DPR, the tanks are filled to capacity with fuel and linked by a set of control valves to a separate tank of helium gas under high pressure. That setup lets us regulate the amount of helium gas flowing into the tanks to push fuel into the combustion chamber, enabling us to program in different amounts of thrust at different points during the rocket's flight.

The 2018 Nexø II mission proved that our design and technology were fundamentally sound. It was time to start working on the human-rated Spica rocket.

Copenhagen Suborbitals hopes to send an astronaut aloft in its Spica rocket in about a decade. Caspar Stanley

With its crew capsule, the Spica rocket will measure 13 meters high and will have a gross liftoff weight of 4,000 kilograms, of which 2,600 kg will be fuel. It will be, by a significant margin, the largest rocket ever built by amateurs.

The Spica rocket will use the BPM100 engine, which the team is currently manufacturing. Thomas Pedersen

Its engine, the 100-kN BPM100, uses technologies we mastered for the BPM5, with a few improvements. Like the prior design, it uses regenerative cooling in which some of the propellant passes through channels around the combustion chamber to limit the engine's temperature. To push fuel into the chamber, it uses a combination of the simple pressure blowdown method in the first phase of flight and the DPR system, which gives us finer control over the rocket's thrust. The engine parts will be stainless steel, and we hope to make most of them ourselves out of rolled sheet metal. The trickiest part, the double-curved "throat" section that connects the combustion chamber to the exhaust nozzle, requires computer-controlled machining equipment that we don't have. Luckily, we have good industry contacts who can help out.

One major change was the switch from the Nexø II's showerhead-style fuel injector to a coaxial-swirl fuel injector. The showerhead injector had about 200 very small fuel channels. It was tough to manufacture, because if something went wrong when we were making one of those channels – say, the drill got stuck – we had to throw the whole thing away. In a coaxial-swirl injector, the liquid fuels come into the chamber as two rotating liquid sheets, and as the sheets collide, they're atomized to create a propellant that combusts. Our swirl injector uses about 150 swirler elements, which are assembled into one structure. This modular design should be easier to manufacture and test for quality assurance.

The BPM100 engine will replace an old showerhead-style fuel injector [right] with a coaxial-swirl injector [left], which will be easier to manufacture. Thomas Pedersen

In April of this year, we ran static tests of several types of injectors. We first did a trial with a well-understood showerhead injector to establish a baseline, then tested brass swirl injectors made by traditional machine milling as well as steel swirl injectors made by 3D printing. We were satisfied overall with the performance of both swirl injectors, and we're still analyzing the data to determine which functioned better. However, we did see some combustion instability, namely some oscillation in the flames between the injector and the engine's throat, a potentially dangerous phenomenon. We have a good idea of the cause of these oscillations, and we're confident that a few design tweaks can solve the problem.
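
One plausible way to quantify such oscillations in static-test data is to look for a dominant spectral peak in the chamber-pressure trace. The sketch below, with an assumed sample rate and synthetic data, illustrates the idea only; it is not our actual analysis pipeline.

```python
# Hedged sketch: detect a dominant oscillation frequency in a
# chamber-pressure trace via FFT. Sample rate and data are assumptions.
import numpy as np

def dominant_oscillation(pressure_samples: np.ndarray, sample_rate_hz: float):
    """Return (frequency, relative amplitude) of the strongest
    non-DC component in the pressure signal."""
    detrended = pressure_samples - pressure_samples.mean()
    spectrum = np.abs(np.fft.rfft(detrended))
    freqs = np.fft.rfftfreq(len(detrended), d=1.0 / sample_rate_hz)
    peak = spectrum[1:].argmax() + 1  # skip the DC bin
    return freqs[peak], spectrum[peak] / spectrum[1:].sum()

# Synthetic example: steady pressure plus a 900 Hz ripple and noise.
t = np.arange(0, 1.0, 1.0 / 10_000)
trace = 10.0 + 0.3 * np.sin(2 * np.pi * 900 * t) + 0.05 * np.random.randn(t.size)
freq, strength = dominant_oscillation(trace, 10_000)
print(f"dominant oscillation ~{freq:.0f} Hz, relative strength {strength:.2f}")
```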

Volunteer Jacob Larsen holds a brass fuel injector that performed well in a 2021 engine test. Carsten Olsen

We'll soon commence building a full-scale BPM100 engine, which will ultimately incorporate a new guidance system for the rocket. Our prior rockets steered with metal vanes mounted inside the engines' exhaust nozzles, which we tilted to change the angle of thrust. But those vanes generated drag within the exhaust stream and reduced effective thrust by about 10 percent. The new design instead has gimbals that swivel the entire engine back and forth to control the thrust vector. As further support for our belief that tough engineering problems can be solved by smart and dedicated people, our gimbal system was designed and tested by a 21-year-old undergraduate student from the Netherlands named Jop Nijenhuis, who used the gimbal design as his thesis project (for which he got the highest possible grade).
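
The trade-off is easy to quantify in a sketch: vanes cost a roughly fixed fraction of thrust to drag, while a gimbal gives up only the cosine of its deflection angle. The numbers below are illustrative, not our measured figures.

```python
# Illustrative comparison of steering authority vs. thrust lost:
# jet vanes bleed ~10 percent of thrust as drag; a gimbaled engine
# trades only cos(deflection angle). Values are assumptions.
import math

THRUST = 100_000.0  # BPM100 nominal thrust, newtons

def vane_axial_thrust(thrust: float, drag_loss: float = 0.10) -> float:
    """Effective axial thrust with vanes permanently in the exhaust."""
    return thrust * (1.0 - drag_loss)

def gimbal_thrust_components(thrust: float, angle_deg: float):
    """Split thrust into axial and lateral parts at a gimbal angle."""
    a = math.radians(angle_deg)
    return thrust * math.cos(a), thrust * math.sin(a)

axial, lateral = gimbal_thrust_components(THRUST, 5.0)
print(f"vanes:  axial {vane_axial_thrust(THRUST):,.0f} N")
print(f"gimbal: axial {axial:,.0f} N, lateral {lateral:,.0f} N at 5 degrees")
```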

We're using the same guidance, navigation, and control (GNC) computers that we used in the Nex rockets. One new challenge is the crew capsule; once the capsule separates from the rocket, we'll have to control each part on its own to bring them both back down to Earth in the desired orientation. When separation occurs, the GNC computers for the two components will need to understand that the parameters for optimal flight have changed. But from a software point of view, that's a minor problem compared to those we've solved already.
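
In outline, that mode switch might look like the sketch below. The structure and every parameter value are placeholders of ours for illustration, not Copenhagen Suborbitals' actual GNC code.

```python
# Sketch of a GNC mode switch at capsule separation: the same control
# loop keeps running, but each vehicle loads its own parameters.
# All values are invented placeholders.
from dataclasses import dataclass

@dataclass
class FlightParams:
    mass_kg: float
    drag_area_m2: float
    target_orientation_deg: float  # desired attitude for descent

STACK = FlightParams(mass_kg=4000.0, drag_area_m2=0.9, target_orientation_deg=0.0)
CAPSULE = FlightParams(mass_kg=300.0, drag_area_m2=1.8, target_orientation_deg=180.0)
BOOSTER = FlightParams(mass_kg=1100.0, drag_area_m2=0.7, target_orientation_deg=180.0)

def active_params(separated: bool, vehicle: str) -> FlightParams:
    """Before separation both computers fly the full stack; afterward
    each one controls only its own body."""
    if not separated:
        return STACK
    return CAPSULE if vehicle == "capsule" else BOOSTER
```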

Bianca Diana works on a drone she's using to test a new guidance system for the Spica rocket. Carsten Olsen

My specialty is parachute design. I've worked on the ballute (a balloon-parachute hybrid), which will inflate at an altitude of 70 km to slow the crewed capsule during its high-speed initial descent, and the main parachutes, which will inflate when the capsule is 4 km above the ocean. We've tested both types by having skydivers jump out of planes with the parachutes, most recently in a 2019 test of the ballute. The pandemic forced us to pause our parachute testing, but we should resume soon.
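
For a feel of the scale involved, the standard drag balance m*g = 0.5*rho*v^2*Cd*A gives a rough splashdown speed under the main canopies. The mass, drag coefficient, and canopy area below are assumed placeholders, not our design figures.

```python
# Rough terminal-velocity estimate for a capsule under its main chutes,
# from the drag balance m*g = 0.5*rho*v^2*Cd*A. Inputs are assumptions.
import math

def terminal_velocity(mass_kg: float, cd: float, area_m2: float,
                      air_density: float = 1.225) -> float:
    """Speed at which drag exactly balances weight near sea level."""
    return math.sqrt(2 * mass_kg * 9.81 / (air_density * cd * area_m2))

# e.g. a ~300 kg capsule under ~80 m^2 of effective canopy at Cd ~ 1.5
print(f"splashdown speed ~{terminal_velocity(300.0, 1.5, 80.0):.1f} m/s")
```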

For the parachute that will deploy from the Spica's booster rocket, the team tested a small prototype of a ribbon parachute. Mads Stenfatt

For the drogue parachute that will deploy from the booster rocket, my first prototype was based on a design called Supersonic X, which is a parachute that looks somewhat like a flying onion and is very easy to make. However, I reluctantly switched to ribbon parachutes, which have been more thoroughly tested in high-stress situations and found to be more stable and robust. I say "reluctantly" because I knew how much work it would be to assemble such a device. I first made a 1.24-meter-diameter parachute that had 27 ribbons going across 12 panels, each attached in three places. So on that small prototype, I had to sew 972 connections. A full-scale version will have 7,920 connection points. I'm trying to keep an open mind about this challenge, but I also wouldn't object if further testing shows the Supersonic X design to be sufficient for our purposes.
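
The arithmetic behind that count is simple, as this quick check shows. The full-scale chute's ribbon and panel counts aren't given here, so its 7,920 figure is taken as stated.

```python
# Connection count for the 1.24-meter prototype described above.
ribbons, panels, attachments_per_panel = 27, 12, 3
print(ribbons * panels * attachments_per_panel)  # 972 sewn connections
```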

We've tested two crew capsules in past missions: the Tycho Brahe in 2011 and the Tycho Deep Space in 2012. The next-generation Spica crew capsule won't be spacious, but it will be big enough to hold a single astronaut, who will remain seated for the 15 minutes of flight (and for two hours of preflight checks). The first spacecraft we're building is a heavy steel "boilerplate" capsule, a basic prototype that we're using to arrive at a practical layout and design. We'll also use this model to test hatch design, overall resistance to pressure and vacuum, and the aerodynamics and hydrodynamics of the shape, as we want the capsule to splash down into the sea with minimal shock to the astronaut inside. Once we're happy with the boilerplate design, we'll make the lightweight flight version.

Copenhagen Suborbitals currently has three astronaut candidates for its first flight: from left, Mads Stenfatt, Anna Olsen, and Carsten Olsen. Mads Stenfatt

Three members of the Copenhagen Suborbitals team are currently candidates to be the astronaut in our first crewed mission: me, Carsten Olsen, and his daughter, Anna Olsen. We all understand and accept the risks involved in flying into space on a homemade rocket. In our day-to-day operations, we astronaut candidates don't receive any special treatment or training. Our one extra responsibility thus far has been sitting in the crew capsule's seat to check its dimensions. Since our first crewed flight is still a decade away, the candidate list may well change. As for me, I think there's considerable glory in just being part of the mission and helping to build the rocket that will bring the first amateur astronaut into space. Whether or not I end up being that astronaut, I'll forever be proud of our achievements.

The astronaut will go to space inside a small crew capsule on the Spica rocket, remaining seated for the 15-minute flight (and for the two hours of preflight checks). Carsten Brandt

People may wonder how we get by on a shoestring budget of about $100,000 a year, particularly when they learn that half of our income goes to paying rent on our workshop. We keep costs down by buying standard off-the-shelf parts as much as possible, and when we need custom designs, we're lucky to work with companies that give us generous discounts to support our project. We launch from international waters, so we don't have to pay for a launch facility. When we travel to Bornholm for our launches, each volunteer pays his or her own way, and we stay in a sports club near the harbor, sleeping on mats on the floor and showering in the changing rooms. I sometimes joke that our budget is about one-tenth of what NASA spends on coffee. Yet it may well be enough to do the job.

We had intended to launch Spica for the first time in the summer of 2021, but our schedule was delayed by the COVID-19 pandemic, which closed our workshop for many months. Now we're hoping for a test launch in the summer of 2022, when conditions on the Baltic Sea will be relatively tame. For this preliminary test of Spica, we'll fill the fuel tanks only partway and will aim to send the rocket to a height of around 30 to 50 km.

If that flight is a success, in the next test, Spica will carry more fuel and soar higher. If the 2022 flight fails, we'll figure out what went wrong, fix the problems, and try again. It's remarkable to think that the Spica astronaut's eventual 15-minute ride to the stars will be the product of more than two decades of work. But we know our supporters are counting down until the historic day when an amateur astronaut will climb aboard a homemade rocket and wave goodbye to Earth, ready to take a giant leap for DIY-kind.

This article appears in the December 2021 print issue as "The First Crowdfunded Astronaut."

Mads Stenfatt first contacted Copenhagen Suborbitals with some constructive criticism. In 2011, while looking at photos of the DIY rocketeers' latest rocket launch, he had noticed a camera mounted close to the parachute apparatus. Stenfatt sent an email detailing his concern: a parachute's lines could easily get tangled around the camera. "The answer I got was essentially, 'If you can do better, come join us and do it yourself,'" he remembers. That's how he became a volunteer with the world's only crowdfunded crewed spaceflight program.

As an amateur skydiver, Stenfatt knew the basic mechanics of parachute packing and deployment. He started helping Copenhagen Suborbitals design and pack parachutes, and a few years later he took over the job of sewing the chutes as well. He had never used a sewing machine before, but he learned quickly over nights and weekends at his dining room table.

One of his favorite projects was the design of a high-altitude parachute for the Nex II rocket, launched in 2018. While working on a prototype and puzzling over the design of the air intakes, he found himself on a Danish sewing website looking at brassiere components. He decided to use bra underwires to stiffen the air intakes and keep them open, which worked quite well. Though he eventually went in a different design direction, the episode is a classic example of the Copenhagen Suborbitals ethos: Gather inspiration and resources from wherever you find them to get the job done.

Today, Stenfatt serves as lead parachute designer, frequent spokesperson, and astronaut candidate. He also continues to skydive in his spare time, with hundreds of jumps to his name. Having ample experience zooming down through the sky, he's intently curious about what it would feel like to go the other direction.

Read this article:

AI Is Helping to Stop Animal Poaching and Food Insecurity - IEEE Spectrum

Posted in Ai | Comments Off on AI Is Helping to Stop Animal Poaching and Food Insecurity – IEEE Spectrum
