SpaceX Tests Experimental Starlink Terminal That Uses 2 …

(Credit: SpaceX)

SpaceX is testing a new version of Starlink that operates via two satellite dishes instead of one.

The company revealed the experimental dish in an FCC filing last week, spotted by Wccftech. The document indicates the dish separates the transmitting and receiving antennas into two squares, each measuring 12.2 by 12.2 inches, that communicate with SpaceX's satellite internet network.

The design is notably different from the circular satellite dish design on a standard Starlink terminal, which the company has been distributing to thousands of eager customers. That dish, which measures 23 inches in diameter, contains both the transmitting and receiving antennas.

SpaceX's application to the FCC doesn't reveal much about the experimental dish or its purpose. The document merely says the company is seeking a six-month license to test the dish starting on July 10 in five states: California, Colorado, Utah, Texas, and Washington.

"The tests requested here are designed to demonstrate the ability to transmit to and receive information from a fixed location on the ground," the application adds. "SpaceX will test antenna equipment functionality and analyze data link performance of the user terminal."

The application was filed as SpaceX is rolling out Starlink across the globe to potentially millions of users in need of high-speed internet. To reach the goal, the company is trying to reduce the $499 upfront cost of each Starlink terminal, which includes the dish and a Wi-Fi modem.

The experimental dish could also represent SpaceXs attempt to upgrade speeds on the Starlink network. At the same time, the company is working to offer Starlink on moving vehicles, including boats and cars.


SpaceX Converts Oil Rig Into a Launch Pad for Starship in …

SpaceX is building an offshore launch pad for its Starship rocket in Mississippi, The Sun Herald first reported on Thursday.

Elon Musk's space company bought two oil rigs off the coast of Texas earlier this year with the intention of converting them into ocean spaceports. One of the rigs, Phobos, is now located in Pascagoula, a city in Jackson County, Mississippi, according to The Sun Herald.

It's unclear where the other launch pad, Deimos, will be situated. The ocean platforms, where Starship will blast off from, have been named after Mars' moons.

Shipbuilding and repair company ST Engineering Halter Marine & Offshore Inc. is working on a six-month project to remove drilling equipment from the Phobos oil rig.

"SpaceX is here in Pascagoula," Jeffrey Gehrmann, ST Engineering's senior vice president of operations, told The Sun Herald.

Gehrmann said the oil rig was towed in from Galveston, Texas, after SpaceX called ST Engineering to ask how much the company would charge to remove the drilling equipment from the oil rig.

"Apparently, our number was better than our competitors', and they brought it to us," he said.

Gehrmann couldn't go into further details about the project due to a nondisclosure agreement with SpaceX, The Sun Herald reported.

"This has the potential of being huge," Gehrmann said.

Musk said on May 30 that Deimos is under construction and could begin launch operations next year.

Both Deimos and Phobos will serve as launch and landing platforms for SpaceX's Starship, a spacecraft that Musk wants to send to Mars. This will be the first time that Starship takes off from an ocean launch pad.

These Starship offshore spaceports follow the success of SpaceX's ocean droneships, including "Just Read the Instructions" and "Of Course I Still Love You," which allow the recovery of Falcon 9 first stages in the Atlantic Ocean.


Relativity Space raises $650 million from Fidelity and others to build 3D-printed SpaceX competitor – CNBC

An artist's rendition of a Terran R rocket launching to orbit.

Relativity Space

3D-printing specialist Relativity Space raised $650 million to step up work on a fully reusable rocket that will attempt to challenge Elon Musk's SpaceX in less than three years, the company announced on Tuesday.

The money will be used "to accelerate some of the production ramp rate and get to a higher launch cadence as quickly as we can, because the demand is certainly there for it," Relativity Space CEO Tim Ellis told CNBC.

Relativity's new capital will be focused on its Terran R rocket, a launch vehicle that would be similar in size and power to SpaceX's workhorse Falcon 9 rocket.

Terran R will carry 20 times more to orbit than Relativity's Terran 1 rocket, which the company is on track to launch for the first time by the end of this year. Additionally, Ellis said Terran 1's backlog of customer orders makes it "the most pre-sold rocket in history before launch."

The raise, which Ellis described as "war chest doubled," was led by Fidelity and comes eight months after Relativity brought in $500 million in a round led by Tiger Global. The $650 million in equity added BlackRock, Centricus, Coatue and Soroban Capital as new Relativity investors, with a host of existing investors including Fidelity, Tiger, Baillie Gifford, K5 Global, Tribe Capital, XN, Brad Buss, Mark Cuban, Jared Leto and Spencer Rascoff building on prior stakes.

Relativity has now raised $1.34 billion in capital since its founding in 2015, with its valuation climbing to $4.2 billion from $2.3 billion in November. Its headcount has grown to 400 people, with Ellis saying the company plans to "add several more hundred this year."

"We've signed up to create a lot of value, certainly remaining the second most highly valued space company in the world," Ellis said, as SpaceX commands an industry-leading $74 billion valuation.

A timelapse from inside of a 3D-printing bay shows the manufacturing process for a Terran 1 second stage flight tank:

Relativity Space

Relativity is building the first iteration of its Terran 1 rocket and has manufactured 85% of the vehicle for the inaugural launch. It uses multiple 3D-printers, all developed in-house, to build Terran 1 and will do the same for Terran R.

The rockets are designed to be almost entirely 3D-printed, an approach which Relativity says makes them less complex, and faster to build or modify, than traditional rockets. Additionally, Relativity says its simpler process will eventually be capable of turning raw material into a rocket on the launchpad in under 60 days.

"We're just seeing in the market that there needs to be another quickly-moving, disruptive launch company that's actually skating to where the puck is going," Ellis said.

He added that Relativity "never seriously considered the SPAC path," believing his company doesn't yet need to go public and can tap "almost limitless capital" in the private markets. A SPAC, or special purpose acquisition company, is a blank-check company that raises funding from investors to finance a merger with a private company to take it public.

Ellis noted that Relativity received higher fundraising offers than the one it accepted from Fidelity, but went with the firm as the lead due to its prestige and reputation.

Relativity Space ranked No. 23 on this year's CNBC Disruptor 50 list.

The row of two-story tall 3D printer bays at the company's headquarters.

Relativity Space

Relativity's Terran 1 rocket is designed to carry 1,250 kilograms to low Earth orbit. That puts Terran 1 in the middle of the U.S. launch market, in the "medium-lift" section between Rocket Lab's Electron and SpaceX's Falcon 9 in capability.

But Terran R would go head-to-head with Falcon 9: targeting a capability of more than 20,000 kilograms to low Earth orbit, almost as tall at 216 feet in length, slightly wider with a 16-foot diameter, and with a similarly sized nosecone to carry satellites to space.

SpaceX's rocket features nine Merlin engines in the booster, each capable of about 190,000 pounds of thrust, while Relativity's Terran R booster will feature seven Aeon R engines that it says will be capable of 302,000 pounds of thrust each. Earlier this year Relativity completed a full duration test firing of a pathfinder engine, using liquid oxygen and liquid methane as its fuel.
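Putting the article's per-engine figures side by side gives a rough sense of the booster comparison. A quick back-of-the-envelope sketch (numbers as quoted above, not official specifications):

```python
# Per-engine thrust as cited in the article (pounds-force).
MERLIN_THRUST = 190_000    # Falcon 9 booster: nine Merlin engines
AEON_R_THRUST = 302_000    # Terran R booster: seven Aeon R engines (claimed)

falcon9_total = 9 * MERLIN_THRUST    # 1,710,000 lbf
terran_r_total = 7 * AEON_R_THRUST   # 2,114,000 lbf

print(f"Falcon 9 booster thrust: {falcon9_total:,} lbf")
print(f"Terran R booster thrust: {terran_r_total:,} lbf "
      f"(~{terran_r_total / falcon9_total:.2f}x Falcon 9)")
```

By these cited figures, the seven-engine Terran R booster would out-thrust the nine-engine Falcon 9 booster by roughly a quarter.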

Musk's company ships its Falcon 9 boosters via highways from its headquarters in California, and Ellis said Relativity will similarly send its Terran R boosters over land to the coast of Texas, before putting them on a barge to its engine testing facility in Mississippi and then on another barge to Florida.

Relativity is aiming to launch the first Terran R mission in 2024 from Cape Canaveral's LC-16 launchpad, where its first Terran 1 missions will also launch. While Relativity is "nearly out of physical space" in the headquarters it moved into last summer, Ellis said the company has the core infrastructure in place needed to manufacture Terran R. It has five large-scale 3D-printers and five smaller "development" printers, and plans to add two more development bays in the near future. But Ellis noted that the company completed work on a new 3D-printer head, which more than doubles its print speed.

"It's not just adding more printer hardware. We're also continuously using the data and learning of printing to actually speed up the process and also make changes to the printer designs themselves," Ellis said.

Ellis emphasized that Terran R has been a part of the plan since Relativity's early days, as the company has seen strong "market interest and demand for creating this vehicle." Although he declined to disclose the name of the customer, Relativity has a "prominent" initial buyer for Terran R launches.

"We've actually been developing [Terran R] this the whole time, so in many ways I feel like this is a weight off my shoulders, a big reveal," Ellis said. "We just needed to get enough traction and resources to be in the spot where now we're going big."

An illustration of a Terran 1 rocket, left, next to a Terran R rocket and a silhouette of a person.

Relativity Space

Ellis said he is a "huge fan" of SpaceX's next-generation Starship rocket, which Musk's company is developing to be fully reusable, hoping to make space travel more akin to air travel.

"We need a vehicle that's going to take people to Mars," Ellis said. "[Starship] is huge and I think that capability is necessary."

As Terran R aims to be fully reusable, Ellis described it as "more a miniature Starship than a Falcon 9 rocket." While SpaceX reuses the boosters of its Falcon 9 rockets, it has not been able to reuse the upper stages that carry satellites on to orbit. Relativity wants Terran R to be a "fresh look at what is the best possible" rocket by designing it to be fully reusable from the beginning.

Terran R's booster, or first stage, will use its engines to land standing upright and has features "that would be nearly impossible to produce without 3D-printing." Ellis said Relativity's long-term goal is to "get to hundreds to thousands of reuses" per rocket. Reusing the second stage will be the next challenge, with Relativity building it "out of a more exotic 3D-printed metal" to make it lighter and able to endure the intense temperatures of reentering the Earth's atmosphere.

"First stage reuse or even second stage may not work perfectly on the very first try, but every single launch attempt that we're bringing in revenue we're able to continue to develop reusability further," Ellis said.

A fully reusable rocket would also be able to deliver cargo quickly from one point on the Earth to another, a use the U.S. military has shown great interest in already with SpaceX's Starship.

"I think point-to-point space transportation is an interesting market that we're looking at" with Terran R, Ellis said.

More broadly, Ellis remains focused on helping to "build an industrial base on Mars" and believes both 3D-printing and fully reusable rockets are key to making that happen.

"No one else is doing full reusability and I think that that's a bit depressing; there needs to be more companies actually trying to make the future happen in a big way," Ellis said. "What we're doing is extremely hard, but we also have the best and most experienced team in the industry."


Marc Benioff reveals investment in SpaceX, but says he’d need ‘a couple of Ativans’ to leave Earth – CNBC

A SpaceX Falcon 9 rocket with a Dragon 2 spacecraft carrying supplies to the International Space Station lifts off from pad 39A at the Kennedy Space Center. This is the 22nd resupply mission for NASA by SpaceX.

Paul Hennessy | LightRocket | Getty Images

Salesforce CEO Marc Benioff said Monday that he's bullish on space, noting that he's an investor in Elon Musk's SpaceX along with start-ups Astra, Swarm Technologies and Planet Labs.

But don't expect Benioff to join Jeff Bezos on a space trip anytime soon. Bezos, who is stepping aside as Amazon CEO in July, said Monday that he'll fly on the first passenger flight of his space company Blue Origin, which is expected to launch on July 20.

In an interview that aired on CNBC's "Closing Bell," Benioff commended Bezos on the announcement.

"I think it's very exciting that he's willing to basically say, 'If you want to use my product, I will use it first,'" Benioff said. "And I think that that's 100% the right move."

But he's not sure he's personally interested in taking a similar trip.

"I think I might have to take a couple of Ativans before I climb in there," Benioff said. (Ativan is an anti-anxiety medication.)

One of Benioff's space investments, Astra, came out of stealth early last year. Astra said in February that it's going public through a special purpose acquisition company (SPAC) that values the business at $2.1 billion. On Monday, the company announced the acquisition of electric propulsion maker Apollo Fusion.

While Benioff's investment in Astra has been reported, his involvements in the other three companies he named have not been disclosed, and none are included among his 122 deals listed by PitchBook.

Most notable is SpaceX, the private space company that was valued by investors earlier this year at $74 billion. Benioff has commended SpaceX in the past, including a retweet of Musk in May 2020, in which Benioff said "visionary leadership." That was just as SpaceX was preparing to launch astronauts into space.

Benioff also said he's an investor in Planet Labs, whose satellite technology takes images from space, and Swarm, which aims to provide internet connectivity from satellites.

"I actually think that space is a huge category that we should invest in," Benioff said, noting that it's an area where Time Ventures, his VC arm, is active. "I think those companies are amazing in the work they're doing, and the entrepreneurs."

CNBC's Michael Sheetz contributed to this report.


SpaceX sends cargo to the space station, with the company on a record launch pace for 2021 – CNBC


SpaceX sent the latest cargo mission for NASA to the space station on Thursday, with Elon Musk's company completing its 17th launch this year.

The company's Falcon 9 rocket took off at 1:29 p.m. EDT from Kennedy Space Center in Florida. The mission, called CRS-22, has SpaceX's Cargo Dragon spacecraft carrying more than 7,300 pounds of research and supplies to the International Space Station.

A few minutes after the launch, SpaceX landed the Falcon 9 booster, the rocket's large bottom portion, on an autonomous ship in the Atlantic Ocean. The Cargo Dragon capsule separated from the rocket about 12 minutes after liftoff, with the spacecraft expected to dock with the ISS on Saturday.

During a pre-launch press conference, SpaceX director of Dragon mission management Sarah Walker noted that CRS-22 is the fifth Dragon capsule the company has sent to the International Space Station in the past 12 months. The company has launched multiple crew and cargo missions in the past year, with a full slate in the year ahead as well.

Additionally, CRS-22 is SpaceX's 17th mission of 2021. The company is on a blistering launch pace, with missions going up on average every nine days since 2021 began.

SpaceX's current pace puts it on track to conduct about 40 launches this year, which would easily top its annual record of 26 launches set last year. It has launched 119 of its Falcon 9 rockets to date, landed 79 of the Falcon 9's boosters, and reused boosters for 61 missions.
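The nine-day cadence and the roughly 40-launch projection follow directly from the article's own numbers. A quick check, using the launch count and dates as reported:

```python
from datetime import date

LAUNCHES = 17  # Falcon 9 missions so far in 2021, per the article
# Days elapsed from New Year's Day to the CRS-22 launch on June 3, 2021.
elapsed = (date(2021, 6, 3) - date(2021, 1, 1)).days

avg_gap = elapsed / LAUNCHES          # ~9 days between missions
full_year = LAUNCHES / elapsed * 365  # ~40 launches if the pace holds

print(f"one launch every {avg_gap:.1f} days, "
      f"on pace for about {full_year:.0f} launches in 2021")
```

That pace comfortably exceeds the 26-launch annual record the article mentions.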

The company's Cargo Dragon spacecraft rolls out to the launchpad in Florida atop a Falcon 9 rocket.

SpaceX

Walker also pointed out that CRS-22 is the first mission of this year to launch on a new Falcon 9 rocket booster, as the company has been reusing boosters for all its recent missions.

"We're actually surprised when we get to a mission [in which we're] flying a new booster," Walker said.

CRS-22 carries dozens of research investigations for the astronauts on the ISS, including experiments about the survival of tardigrades in space, a portable ultrasound device, robotic operations demonstrations and more. Cargo Dragon is also bringing the first two of six new solar arrays called iROSA, built by Boeing and space infrastructure conglomerate Redwire Space. The new solar arrays are expected to improve the ISS' power generation by 20% to 30%.

This Cargo Dragon spacecraft is expected to return to Earth in July, splashing down in the Atlantic Ocean off the coast of Florida with 5,300 pounds of experiments and cargo.


The Pentagon wants to use private rockets like SpaceX’s Starship to deliver cargo around the world – CNBC

Starship prototype SN10 fires its three Raptor engines as it comes in for the landing.

SpaceX

The U.S. Air Force said Friday it is expanding a small development program that wants to leverage reusable rockets, like those SpaceX is building, to deliver cargo quickly to anywhere in the world.

Called Rocket Cargo, the experimental military program will be led by the U.S. Space Force, the Pentagon said. The program will research and help develop capabilities such as landing "a rocket on a wide range of non-traditional materials and surfaces," engineering "a rocket cargo bay and logistics for rapid loading and unloading" and air-dropping "cargo from the rocket after re-entry in order to service locations where a rocket or aircraft cannot possibly land."

The Air Force's 2022 budget proposal requested almost $50 million for Rocket Cargo to continue the study concept work it began last year with small contracts to SpaceX and Exploration Architecture Corporation (XArc).

Rocket Cargo effectively describes the Starship rockets that SpaceX is developing, as the military program will look at fully reusable private rockets that can launch between 30 and 100 tons. Currently, Starship is the only rocket in development that is planned to be both fully reusable and capable of launching that much mass.

Point-to-point space travel is a form of transportation, in which a rocket would launch into space and then return at another location, making it hypothetically capable of bringing supplies or possibly people from one side of the Earth to the other in under an hour.

SpaceX has been testing prototypes of Starship at its facility in Texas, most recently landing and recovering prototype SN15 after a high-altitude flight test. While SpaceX is aiming to accomplish a feat no previous rocket has achieved (reusing rockets quickly to make spaceflight more akin to air travel, instead of the traditional approach of discarding the rocket after launch), that last high-altitude flight test was the first that ended without the prototype exploding. The company has yet to reach orbit with the rocket.

Dr. Greg Spanjers, the Air Force Research Laboratory's leader on the Rocket Cargo program, gave NASA's Human Landing Systems program competition as an example of companies working on "viable" options of the Rocket Cargo capability. That NASA program, which is focused on building lunar landers that carry crew to the moon's surface, featured three teams led by Elon Musk's SpaceX, Jeff Bezos' Blue Origin and Leidos' subsidiary Dynetics. But Spanjers said the Air Force has "talked to many more companies than that" about the Rocket Cargo program.

"We talked to a number of providers that we see potentially coming to the table to compete for these contracts," Spanjers said Friday. "SpaceX is certainly the most visible, no question about it [but] what you're trying to do is go into an orbital or a suborbital trajectory, bring the payload back down, and land it on the planet Earth. There are multiple companies that have that technological capability today, not just SpaceX."

The Air Force declined to specify which companies it has talked to about the Rocket Cargo program, with Spanjers saying it is not "appropriate" before the Pentagon begins the contracting process. The contract solicitation is planned to begin very soon, although the Air Force declined to provide a date.

Additionally, the Air Force is willing to consider companies for Rocket Cargo which are not yet developing a point-to-point fully reusable capability.

"Today we are going to build the interfaces and the inroads to encourage more and more companies to enter into that realm. Hopefully they perceive a return on investment, in a business case that's approved by the [Department of Defense] expressing interest in buying the service down the road," Spanjers said.


Everything You Need to Know About the Rise of SpaceX – Analytics Insight

SpaceX, which is trending on Twitter and YouTube, is an innovative and ambitious private aerospace manufacturer. Founded by Elon Musk, the company came into existence in 2002. SpaceX rose to fame and prominence in 2017, when it became one of the first aerospace startups to accomplish 18 successful launches.

Elon Musk, besides his wealth, is also known and praised for his eccentric and innovative ideas. According to Musk, space travel is unnecessarily expensive, and its costs can be cut down. Curtailing space travel costs stands as the primary objective of SpaceX. Musk's approach to curtailing costs centers on sustainability: he emphasizes reusing parts of rockets instead of investing in new ones, which helps reduce costs.

SpaceX holds the record of delivering 48 satellites into orbit and around 202,700 supplies to the International Space Station. The aerospace manufacturer now holds over 60% of global launch contracts.

SpaceX is among the first aerospace manufacturers to preach and practice the reuse of rocket parts. A pertinent example is the Falcon 9 rocket, which the company says can be used over 100 times without the need for replacement. In 2019, a Falcon 9 was launched and landed for the fourth time, encouraging the team to recycle rocket parts further to reduce costs to great degrees.

Elon Musk is making rapid and significant progress in his mission to colonize Mars and save humanity from the possible chances of extinction. Musk set out on this interplanetary course after building his knowledge and wealth through PayPal and Tesla.

SpaceX is aiming for a swift timeframe within which there will be a human footstep on Mars, with plans to start human habitation there within ten launches. To accomplish this mission, the company is building a factory at the Port of Los Angeles, where engineers are working in a nearly 20,000-square-foot tent on a prototype spaceship built with carbon fiber materials. Additionally, SpaceX is partnering with NASA to conduct Mars mission workshops.


See the citizen astronauts of Inspiration4 learn how to fly a SpaceX Dragon (photos) – Space.com

The citizen astronauts flying to space with Inspiration4 this September are hard at work training for their mission at SpaceX HQ.

This September, four private astronauts will zoom around Earth aboard a SpaceX Crew Dragon spacecraft on the three-day Inspiration4 mission, which was booked by tech billionaire (and crew commander) Jared Isaacman.

The crew also includes childhood bone cancer survivor Hayley Arceneaux, a physician assistant at St. Jude Children's Research Hospital; data engineer Chris Sembroski; and geoscientist, science communicator and artist Sian Proctor. All four have been hard at work training for the flight. Most recently, Isaacman and Proctor have been "learning to fly a Dragon."

Related: Meet the contest-winning crew of Inspiration4

Isaacman and Proctor, who will serve as pilot for the mission, have been training at SpaceX's headquarters in Hawthorne, California. The pair have been getting acquainted with the Crew Dragon because, while it does fly autonomously, they have to know the craft inside and out and be prepared for any possible scenario.

"Crew training continues for #Inspiration4 commander @rookisaacman and pilot @DrSianProctor, with more time spent in @SpaceX simulators to familiarize the team with various aspects of flying Dragon!" team members said via Inspiration4's Twitter account about the training progress that they are making with Crew Dragon.

The post included photos of Isaacman and Proctor in SpaceX spacesuits and training in simulations to prepare to fly a Crew Dragon capsule. The pair look focused on the task at hand as they work hard for their flight.

Inspiration4 is set to be the first-ever crewed space mission to launch without any "professional astronauts" on board, so the foursome will have to be extra prepared for their orbital journey. Additionally, the mission is designed to raise money and support for St. Jude Children's Research Hospital. Each crew member represents a different "pillar": leadership, hope, generosity and prosperity.


It’s Launch Day! Here’s what you need to know for today’s SpaceX launch – Florida Today


It's launch day!

SpaceX is on track to launch its Falcon 9 rocket from Kennedy Space Center's Pad 39A.

Here's what you need to know for today's launch:

Liftoff is scheduled for 1:29 p.m.

The 230-foot-tall Falcon 9 rocket will ferry a Cargo Dragon capsule to the International Space Station on a resupply mission.

The launch window is instantaneous, meaning the rocket goes at 1:29 p.m. or it doesn't.

Weather forecast is 60% "go" at the launch pad.

Approximately eight minutes after liftoff, the Falcon 9 rocket's first-stage booster will target an automatic landing on a drone ship in the Atlantic Ocean.

In the event of a scrub, teams have a backup opportunity at 1:03 p.m. Friday.

If SpaceX sticks the landing, the private space company will refurbish the booster and use it for the Crew-3 flight of astronauts Raja Chari, Thomas Marshburn, Matthias Maurer, and Kayla Barron currently planned for late October.

The mission marks the 22nd ISS resupply for SpaceX, which has been delivering cargo and science experiments to low-Earth orbit since 2012.

Full coverage of the launch kicks off at 12 p.m. Thursday at floridatoday.com/space and will feature in-depth coverage. Ask our space team reporter Emre Kelly questions and strike up a conversation. We will also be hosting SpaceX's live webcast of the launch.

Rob Landers is a USA TODAY Network of Florida multimedia journalist. You can reach him at rlanders@floridatoday.com


Is the Space Force about to acquire SpaceX Starships? | TheHill – The Hill

Eric Berger over at Ars Technica has noticed something in the Department of the Air Force section of President Biden's fiscal 2022 budget proposal. The Air Force is proposing to spend money to study how the Starship rocket being developed by SpaceX could be used to deliver 100 tons of cargo anywhere in the world within one hour. The Starship as a point-to-point cargo hauler may be just the first task that the SpaceX rocket ship is asked to perform.

Certainly, the military would appreciate having the ability to send supplies to any place in the world within an hour. The practical problems of making the Starship work as a cargo hauler would be formidable. A single insurgent with a ground-to-air missile might turn a landing into a fireball.

The Motley Fool, a private investment advice company, is quite bullish on the military potential of the Starship. The company envisions the SpaceX rocket ship performing a variety of military missions from low Earth orbit to the vicinity of the moon. Starship could be used as a mobile, versatile reconnaissance platform, using its store of fuel and six vacuum-optimized Raptor engines to maneuver where it needs to go.

The SpaceX Starship could perform a number of other military missions, such as striking at the space assets of enemy nations in times of war and defending American satellites and other space-based installations. The rocket ship could refuel American satellites, extending their operational lifespans. It could even be used to help clean up space debris. The Space Force would thus grow from a handful of personnel manning consoles and conducting planning meetings to a true war fighting branch of the military.

The Starship, currently under development at the SpaceX testing facility in Boca Chica, Texas, is the instrument of company CEO Elon Musk's dream to build a settlement on Mars. Musk envisions the rocket ship taking settlers and the supplies they need to survive to the red planet, making a new branch of human civilization.

NASA is already so impressed by the Starship that it has contracted SpaceX to build a lunar-landing version of it to return astronauts to the moon as early as 2024. The selection has enraged Musk's rivals such as Blue Origin's Jeff Bezos and has perturbed some members of Congress. Both have only themselves to blame: Blue Origin for offering an inferior design and Congress for underfunding the Human Landing System project.

Military technology development has often been defined by the advent of new ways to transport people and cargo. The racing galleon of the 16th century became the frigates and ships of the line that defined naval warfare in the 18th and early 19th centuries. The steam engine and iron and steel armor led to the dreadnoughts of the early 20th century. Modern warships incorporate nuclear power. Air travel has caused the same sort of evolution, from the motorized kites of World War I to modern jets that can deliver destruction and death from thousands of miles away.

Now, space transportation technology is poised to cause a similar revolution in the military's ability to defend the United States and its allies and to inflict mayhem and death on any enemy that would propose to make war on America. The great irony is that the Starship will be used by a branch of the military that Musk once compared to Starfleet, the fictional service depicted in the "Star Trek" television shows and movies. The thought would likely bring a smile to the face of the franchise's creator, Gene Roddenberry, in whatever afterlife one envisions him inhabiting.

No doubt entire libraries will be written about how life has started to imitate art in this way. As a practical matter, the United States, by being the first to develop a true war fighting capability beyond the Earth's atmosphere, will have ensured its survival as a free society and the dominant superpower. Friends of America should take comfort in this fact. American power has, by and large, been a force for good.

America's enemies, though, should take caution. Their ability to make trouble is about to be further circumscribed. No other country has the capabilities that the SpaceX Starship will provide, or is likely to for quite some time to come.

Mark Whittington, who writes frequently about space and politics, has published a political study of space exploration titled "Why is It So Hard to Go Back to the Moon?" as well as "The Moon, Mars and Beyond" and, most recently, "Why is America going back to the Moon." He blogs at Curmudgeon's Corner. He is published in The Wall Street Journal, Forbes, The Hill, USA Today, the Los Angeles Times and The Washington Post, among other venues.


USAF wants to deliver cargo around the world with reusable rockets – Aerospace Manufacturing

The US Air Force has announced the development of a new type of rocket-powered transporter to deliver cargo around the world.

The Air Force Research Laboratory will determine whether large commercial rockets like SpaceX's Starship launch vehicle could be used for Department of Defense global logistics.

AFRL will research and develop the aspects needed to leverage the new commercial capability for the DoD logistics mission. This includes the ability to land a rocket on a wide range of non-traditional materials and surfaces, including at remote sites.

In addition, AFRL scientists and engineers will research the ability to safely land a rocket near personnel and structures, engineer a rocket cargo bay and logistics for rapid loading and unloading, and air drop cargo from the rocket after re-entry in order to service locations where a rocket or aircraft cannot possibly land.

A USAF illustration of the Rocket Cargo concept, which bears a striking resemblance to SpaceX's Starship design

"The Air Force has provided rapid global mobility for decades and Rocket Cargo is a new way the Department can explore complementary capabilities for the future," said acting secretary of the Air Force John Roth. "Vanguard initiatives lead to game-changing breakthroughs that preserve our advantage over near-peer competitors, and this latest addition is also a significant milestone as the first Vanguard evaluated under the Space Force's oversight."

Based on the commercial capability and business objectives, the AFRL is currently assessing emerging rocket capability across the commercial vendor base, and its potential use for quickly transporting DoD materiel to ports across the globe.

"The Rocket Cargo Vanguard is a clear example of how the Space Force is developing innovative solutions as a service, in particular the ability to provide independent options in, from, and to space," said Chief of Space Operations Gen. John W. "Jay" Raymond. "Once realized, Rocket Cargo will fundamentally alter the rapid logistics landscape, connecting materiel to joint warfighters in a fraction of the time it takes today. In the event of conflict or humanitarian crisis, the Space Force will be able to provide our national leadership with an independent option to achieve strategic objectives from space."

Historically the high costs of launch have been prohibitive for a logistics-focused application, and the relatively small payload capability constrained the types of cargo that could be delivered, also limiting its suitability. Today several commercial companies are quickly generating new opportunities by developing large rockets and reusable stages that safely land back on Earth, expanding cargo capacity and dramatically reducing launch costs.

Under the new Rocket Cargo Vanguard, the DAF will seek to leverage these commercial advances and position the DoD to be an early adopter of the new commercial capability.

This approach is a marked departure from the past, where the US government led rocket technology development and bore the brunt of the cost. Today, with the commercial space launch providers developing the advanced rockets, the DAF will instead primarily invest to quickly adapt the capability to the DoD logistics missions, and then be the first customer procuring the new commercial capability through service leases.

http://www.af.mil

Michael Tyrrell

Digital Coordinator



SpaceX Dragon delivers solar arrays to the International Space Station – Electrek.co

An uncrewed SpaceX Dragon CRS-22 cargo ship launched on a Falcon 9 rocket from NASA's Kennedy Space Center on June 3 and arrived at the International Space Station (ISS) two days later. It was carrying, among many other things, two new solar arrays that will power the ISS.

Jacksonville-headquartered Redwire Space is an aerospace manufacturer and space infrastructure technology company. Redwire is under contract with Boeing, NASA's prime contractor for space station operations, to provide six International Space Station Roll-Out Solar Arrays (iROSA).

The Dragon delivered two of the six iROSA arrays on this trip, rolled up and stowed in a compact cylinder. The other four solar arrays will arrive at the International Space Station by 2023.

Redwire is responsible for the design, analysis, manufacture, test, and delivery of iROSA. Each iROSA uses upgraded solar cells from Boeing's Spectrolab and provides 28 kilowatts of power. Together, the six arrays will produce more than 120 kilowatts, boosting the station's power generation by 20-30%. ROSA technology was successfully demonstrated on the ISS in June 2017.
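A quick back-of-the-envelope check of the figures quoted above (28 kilowatts per array, six arrays, a 20-30% boost). The pre-upgrade baseline used here is an assumed round number for illustration only, not a figure from the article:

```python
# Check the iROSA power arithmetic quoted in the article.
PER_ARRAY_KW = 28   # quoted output of each iROSA array
NUM_ARRAYS = 6      # total arrays under contract

total_new_kw = PER_ARRAY_KW * NUM_ARRAYS
print(total_new_kw)  # 168 -- consistent with "more than 120 kilowatts"

# What a 20-30% boost would mean against an ASSUMED ~160 kW baseline
# (illustrative only; the article does not state the baseline).
baseline_kw = 160
boost_kw = (baseline_kw * 20 // 100, baseline_kw * 30 // 100)
print(boost_kw)  # (32, 48)
```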

Redwire explains how it works:

ROSA's patented design uses composite booms to serve as both the primary structural elements and the deployment actuator together with a modular photovoltaic blanket assembly that can be configured into various solar array architectures. Instead of using complex mechanisms and motors for deployment, ROSA uses stored energy from carbon fiber booms that are cut and rolled back against their natural shape for storage. At a designated point during the mission, the stored strain energy of the booms enforces the unrolling deployment actuation. When fully deployed, the now rigid booms provide the solar array's structural stiffness and strength.

NASA astronaut Shane Kimbrough said [via Space.com]:

"Looking forward to all the science and other goodies that it brought up along with our EVA solar arrays. It's going to be a great few weeks as we get into Dragon and get things out."

Astronauts will install the two iROSA on spacewalks on June 16 and 20.



Weekly Bytes | SpaceX's floating spaceport, Ransomware attack on Fujifilm, and more – The Hindu

Here's our curated list of important tech news from this week in byte size.


Earlier this year, SpaceX bought two oil rigs in the Port of Brownsville, Texas, with a plan to convert them into floating launch pads for the Starship. SpaceX CEO Elon Musk on Monday confirmed the company has started building one of the floating spaceports that may be launch-ready in 2022. "Ocean spaceport Deimos is under construction for launch next year," Musk wrote on Twitter. The spaceports have been named after the two moons of Mars, Deimos and Phobos. SpaceX is building the Starship to carry both crew and cargo to Earth orbit, the Moon, Mars and beyond. It also plans to use the Starship and floating spaceports for hypersonic travel around Earth. "Sometime in the future Starship will be capable of taking people from any city to any other city on Earth in under one hour," according to the aerospace firm. In another space update, NASA announced plans to launch two missions to Venus between 2028 and 2030, its first in decades, to study the atmosphere and geologic features of Earth's so-called sister planet.

Fujifilm corporation on Wednesday said that it became aware of a possible ransomware attack in the late evening of June 1. Two days later, the firm confirmed it and said the impact of the unauthorised access is confined to a specific network in Japan. "A special Task Force, including external experts, was immediately established, and all networks and servers were shut down to determine the extent and the scale of the issue," the Japanese company said in a statement. "Starting today [June 4], with a clear understanding of the extent of the impact, we have begun to bring the network, servers, and computers confirmed safe back into operation." The Japanese multinational conglomerate is the latest victim of a cyberattack amid a growing number of ransomware attacks that have become a major cause of concern for organisations across the globe. This week, a ransomware attack on the world's largest meat processing company disrupted production around the world just weeks after a similar incident shut down a U.S. oil pipeline.

Twitter on Thursday said it is partnering with organisations committed to tackling climate change and making it easier to find credible climate information from global experts. The micro-blogging platform is collaborating with organisations like Earth Day Network, UN Environment Programme, UN Framework Convention on Climate Change, Greenpeace, Voice for the Planet, Let Me Breathe, WWF, 350.org, and FridaysForFuture. Also, starting this week, users can follow the Climate Change Topic to find personalised conversations about climate change, including Tweets from environmental and sustainability organisations, environmental activists, and scientists, it noted. In a separate development, Twitter resumed accepting requests for its verification programme after pausing it due to an overflow of requests. In another update, Twitter unveiled its new subscription-based service that will grant users access to exclusive features, including one to undo a published tweet within 30 seconds of posting.

Microsoft on Wednesday said it is expanding the Airband Initiative to U.S. cities that face some of the largest broadband gaps among racial and ethnic minorities, specifically Black and African American communities. The initiative was launched four years ago to improve broadband access in rural areas. "Our approach focuses on providing access to affordable broadband, devices and digital skilling tools and resources in eight cities, including aiding in the digital transformation of the institutions that support these communities," Vickie Robinson, General Manager, Airband Initiative, said in a blog post. The initiative will be extended to communities in Atlanta, Cleveland, Detroit, El Paso, Los Angeles, Memphis, Milwaukee and New York City. The Federal Communications Commission claims that more than 14.5 million people in the U.S. do not have broadband access. This week, Morgan Stanley and Microsoft announced a cloud partnership to support the former's digital transformation and push the financial services industry forward.

Mozilla on Tuesday released a redesigned version of Firefox with a modern and cleaner look to offer users a fresh new web experience. The browser's tabs are now bigger and float neatly above the toolbar. "We detached the tab from the browser to invite you to move, rearrange and pull-out tabs into a new window to suit your flow, and organise them so they're easier for you to find," Mozilla noted in a blog post. The menus are streamlined with labels that are clear and easier to understand, and fewer icons for better navigation. The browser also got a new simplified toolbar, and consolidated panels for notifications like microphone and camera permissions, enabling users to get to all their web calls and meetings with fewer clicks. The redesigned version of Firefox, which also improves privacy protection in Private Browsing mode, is available for desktop and mobile devices. Next year, Google plans to phase out technology in its Chrome browser that lets other companies track users' web browsing. Here's how brands may target ads after the death of browser cookies.

Samsung on Thursday unveiled two new Galaxy Book devices, expanding its PC line-up. The Galaxy Book Go series is powered by the Qualcomm Snapdragon 7c Gen 2, with optional LTE connectivity, and the Galaxy Book Go 5G, powered by the Qualcomm Snapdragon 8cx Gen 2 5G, promises to deliver lightning-fast 5G connectivity speeds. The Galaxy Book Go series features a 14-inch FHD display, 720p HD camera, 180-degree folding hinge, Dolby Atmos for rich soundscape, 4GB or 8GB (LPDDR4X) RAM, 64GB or 128GB (eUFS) storage, and a 42.3 watt-hour battery, and builds on Windows 10 experiences with the benefit of instant boot speeds, allowing users to open it and immediately use their PC. The Galaxy Book Go Wi-Fi and LTE versions will be available from June in select markets starting at $349 (about ₹25,400), while the Galaxy Book Go 5G will be available later this year, Samsung noted. In another gadget update, Alienware, a favourite among the gaming community, expanded its product portfolio with the launch of new X-Series gaming laptops.



artificial intelligence | Definition, Examples, and …

Artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks, such as discovering proofs for mathematical theorems or playing chess, with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.

Top Questions

Artificial intelligence (AI) is the ability of a computer or a robot controlled by a computer to do tasks that are usually done by humans because they require human intelligence and discernment. Although there are no AIs that can perform the wide variety of tasks an ordinary human can do, some AIs can match humans in specific tasks.

No, artificial intelligence and machine learning are not the same, but they are closely related. Machine learning is a method of training a computer to learn from its inputs without explicit programming for every circumstance. Machine learning helps a computer to achieve artificial intelligence.

All but the simplest human behaviour is ascribed to intelligence, while even the most complicated insect behaviour is never taken as an indication of intelligence. What is the difference? Consider the behaviour of the digger wasp, Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries her food inside. The real nature of the wasp's instinctual behaviour is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on emerging, she will repeat the whole procedure as often as the food is displaced. Intelligence, conspicuously absent in the case of Sphex, must include the ability to adapt to new circumstances.

Psychologists generally do not characterize human intelligence by just one trait but by the combination of many diverse abilities. Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language.

There are a number of different forms of learning as applied to artificial intelligence. The simplest is learning by trial and error. For example, a simple computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution with the position so that the next time the computer encountered the same position it would recall the solution. This simple memorizing of individual items and procedures, known as rote learning, is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalization involves applying past experience to analogous new situations. For example, a program that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a word such as "jump" unless it previously had been presented with "jumped", whereas a program that is able to generalize can learn the "add ed" rule and so form the past tense of "jump" based on experience with similar verbs.
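The rote-versus-generalization distinction above can be made concrete with a minimal sketch (an illustration of the idea, not code from the article). The tiny vocabulary and function names are invented for the example:

```python
# Rote learner: a pure lookup table of past tenses it has been shown.
rote_memory = {"walk": "walked", "talk": "talked"}

def rote_past_tense(verb):
    # Can only recall forms it has memorized; returns None for novel verbs.
    return rote_memory.get(verb)

# Generalizing learner: has induced the "add ed" rule from its examples,
# so it can handle a regular verb it has never seen, such as "jump".
def generalized_past_tense(verb):
    return verb + "ed"

print(rote_past_tense("jump"))         # None -- never presented with it
print(generalized_past_tense("jump"))  # jumped
```

The rote learner fails exactly where the article says it should: on any verb outside its memorized list, while the rule-based learner transfers its experience to the new case (and, like real children over-applying the rule, would also wrongly produce "goed" for irregular verbs).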


Artificial Intelligence What it is and why it matters | SAS

The term artificial intelligence was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names.

This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn't that scary or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry. Keep reading for modern examples of artificial intelligence in health care, retail and more.

Why is artificial intelligence important?


What you need to know about data fluency and federated AI – Healthcare IT News

Sharecare is a digital health company that offers an artificial intelligence-powered mobile app for consumers. But it has a strong viewpoint on AI and how it is used.

Sharecare believes that while other companies use augmented analytics and AI to understand data with business intelligence tools, they are missing out on the benefits of data fluency and federated AI. By using federated AI and data fluency, Sharecare says it digs deeper to find hidden similarities in the data that business intelligence tools would not be able to detect in health settings.

To gain a deeper understanding of data fluency and federated AI, Healthcare IT News sat down with Akshay Sharma, executive vice president of artificial intelligence at Sharecare, for an in-depth interview.

Q: What exactly is federated AI, and how is it different from any other form of AI?

A: Federated AI, or federated learning, guarantees that the user's data stays on the device. For example, the applications that run specific programs on the edge of the network can still learn how to process the data and build better, more efficient models by sharing a mathematical representation of key clinical features, not the data.

Traditional machine learning requires centralizing data to train and build a model. However, with edge AI and federated learning combined with other privacy-preserving techniques and zero trust infrastructure, it's possible to build models in a distributed data setup while lowering the risk of any single point of attack.

The application of federated learning also applies in cloud settings where the data doesn't have to leave the systems on which it exists but can allow for learning. We call this federated cloud learning, which organizations can use to collaborate, keeping the data private.
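The mechanism described above can be sketched as a toy federated-averaging loop: each client trains on data that never leaves its own function scope, and only a model parameter travels to the server. The one-parameter linear model, learning rate, and client datasets are illustrative assumptions, not Sharecare's actual system:

```python
def local_update(weight, data, lr=0.01, epochs=5):
    """One client's local training pass; `data` never leaves this function."""
    for _ in range(epochs):
        # Gradient of mean squared error for the model y ~= weight * x
        grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
        weight -= lr * grad
    return weight  # only this number is shared with the server

def federated_round(global_weight, client_datasets):
    """The server averages the client weights; it never sees raw data."""
    client_weights = [local_update(global_weight, d) for d in client_datasets]
    return sum(client_weights) / len(client_weights)

# Two clients whose private datasets both follow y = 2x
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (4.0, 8.0)],
]
w = 0.0
for _ in range(20):
    w = federated_round(w, clients)
print(round(w, 2))  # converges toward 2.0
```

Real deployments layer the privacy-preserving techniques mentioned in the interview (secure aggregation, differential privacy, zero-trust infrastructure) on top of this basic share-parameters-not-data pattern.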

Q: What is data fluency, and why is it important to AI?

A: Data fluency is a framework and set of tools to rapidly unlock the value of clinical data by having every key stakeholder participate simultaneously in a collaborative environment. A machine learning environment with a data fluency framework engages clinicians, actuaries, data engineers, data scientists, managers, infrastructure engineers and all other business stakeholders to explore the data, ask questions, quickly build analytics and even model the data.

This novel approach to enterprise data analytics is purpose-built for healthcare to improve workflows, collaboration and rapid prototyping of ideas before spending time and money on building models.

Q: How do data fluency platforms enable analysts, engineers, data scientists and clinicians to collaborate more easily and efficiently?

A: Traditional healthcare systems are very siloed, and many organizations struggle to discover the value within their data and unlock actionable trends and clinical insights. Not only are data creation systems and teams isolated from data transformation systems and teams, but engineers and data scientists use coding languages while clinicians and finance teams use Word or Excel.

The disconnect creates a situation where the data knowledge is translated outside of the programming environment. The transformations between system boundaries are lossy and without feedback loops to improve an algorithm or the code. Yet, all stakeholders need early and iterative access to the data to build health algorithms effectively and with greater transparency.

The modern healthcare stack facilitates the collaboration of cross-functional teams from a single, data-driven point of view in Python Notebooks with a UI for non-engineering partners. AI models can be time-consuming and expensive to build, and it is essential to hedge your bets by getting early prototype input across domains of expertise.

Data fluency provides an environment for critical stakeholders to discover the value on top of the data or insights and in a real-time, agile and iterative way. The feedback from non-engineering teams is immediate and can help improve the underlying model or code in the notebook instantaneously.

Each domain expert can have multiple data views that facilitate deep collaboration and data insight discovery, enabling the continuous learning environment from care to research and from research to care. Data fluency works with cloud-native architectures, and many of the techniques can also automatically extend to computing on edge, where the patient and their data reside.

Q: Why do you say the future of analytics in healthcare is federated AI and data fluency?

A: Traditional analytics in healthcare is rooted in understanding a given set of data by using business intelligence-focused tools. The employees using these tools are not typically engineers but analysts, statisticians and business users.

The problem with traditional enterprise data analytics is that you don't learn from data; you only understand what's in it. To learn from data, you have to bring machine learning into the equation and effective feedback loops from all relevant stakeholders.

Machine learning helps surface hidden patterns in the data, especially if there are non-linear relationships that aren't easily identifiable to humans. Proactive collaboration at the data layer provides transparency into how the models or analytics metrics are built and makes it easier to unravel bias or assumptions and correct them in real time.

Federated AI and data fluency also address the barriers to data acquisition, which are often not technological, but instead include privacy, trust, regulatory compliance and intellectual property. This is especially the case in healthcare, where patients and consumers expect privacy with respect to personal information and where organizations want to protect the value of their data and are also required to follow regulatory laws such as HIPAA in the United States and the GDPR [General Data Protection Regulation] in the Eurozone.

Access to healthcare data is extremely difficult and guarded behind compliance walls. Usually, at best, access is provided to de-identified data with several security measures. Federated AI and the principles of data fluency can share a model without sharing the data used to train it and address these concerns. It will play a critical role in understanding the insights within distributed data silos while navigating with compliance barriers.

The privacy-preserving approach to unlocking the value of health data is crucial to the future of healthcare. The point is to improve healthcare machine learning adoption and understandability to drive actionable insights and better health outcomes. Federated AI goes beyond traditional enterprise data analytics to create a machine learning environment for data fluency and explainability that enables the training of models in parallel from automated multi-omics pipelines.

Twitter: @SiwickiHealthIT. Email the writer: bsiwicki@himss.org. Healthcare IT News is a HIMSS Media publication.


What is AI? Everything you need to know about Artificial …

What is artificial intelligence (AI)?

It depends who you ask.

Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a machine that would have previously been considered to require human intelligence.

That's obviously a fairly broad definition, which is why you will sometimes see arguments over whether something is truly AI or not.


Modern definitions of what it means to create intelligence are more specific. Francois Chollet, AI researcher at Google and creator of the machine-learning software library Keras, has said intelligence is tied to a system's ability to adapt and improvise in a new environment, to generalise its knowledge and apply it to unfamiliar scenarios.

"Intelligence is the efficiency with which you acquire new skills at tasks you didn't previously prepare for," he said.

"Intelligence is not skill itself, it's not what you can do, it's how well and how efficiently you can learn new things."

It's a definition under which modern AI-powered systems, such as virtual assistants, would be characterised as having demonstrated 'narrow AI'; the ability to generalise their training when carrying out a limited set of tasks, such as speech recognition or computer vision.

Typically, AI systems demonstrate at least some of the following behaviours associated with human intelligence: planning, learning, reasoning, problem solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social intelligence and creativity.

AI is ubiquitous today: it is used to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon's Alexa and Apple's Siri, to recognise who and what is in a photo, to spot spam, and to detect credit card fraud.

At a very high level, artificial intelligence can be split into two broad types: narrow AI and general AI.

As mentioned above, narrow AI is what we see all around us in computers today: intelligent systems that have been taught or have learned how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, or in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do defined tasks, which is why they are called narrow AI.

There are a vast number of emerging applications for narrow AI: interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines, organizing personal and business calendars, responding to simple customer-service queries, coordinating with other intelligent systems to carry out tasks like booking a hotel at a suitable time and location, helping radiologists to spot potential tumors in X-rays, flagging inappropriate content online, detecting wear and tear in elevators from data gathered by IoT devices, generating a 3D model of the world from satellite imagery, the list goes on and on.

New applications of these learning systems are emerging all the time. Graphics card designer Nvidia recently revealed an AI-based system, Maxine, which allows people to make good-quality video calls almost regardless of the speed of their internet connection. The system reduces the bandwidth needed for such calls by a factor of 10 by not transmitting the full video stream over the internet and instead animating a small number of static images of the caller, in a manner designed to reproduce the caller's facial expressions and movements in real time and to be indistinguishable from the video.

However, as much untapped potential as these systems have, sometimes ambitions for the technology outstrip reality. A case in point is self-driving cars, which are themselves underpinned by AI-powered systems such as computer vision. Electric car company Tesla is lagging some way behind CEO Elon Musk's original timeline for the car's Autopilot system being upgraded to "full self-driving" from the system's more limited assisted-driving capabilities, with the Full Self-Driving option only recently rolled out to a select group of expert drivers as part of a beta testing program.

General AI is very different, and is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets, or reasoning about a wide variety of topics based on its accumulated experience. This is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but which doesn't exist today and AI experts are fiercely divided over how soon it will become a reality.

SEE: How to implement AI and machine learning (free PDF)

A survey conducted among four groups of experts in 2012/13 by AI researcher Vincent C. Müller and philosopher Nick Bostrom reported a 50% chance that Artificial General Intelligence (AGI) would be developed between 2040 and 2050, rising to 90% by 2075. The group went even further, predicting that so-called 'superintelligence', which Bostrom defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", was expected some 30 years after the achievement of AGI.

However, recent assessments by AI experts are more cautious. Pioneers in the field of modern AI research such as Geoffrey Hinton, Demis Hassabis and Yann LeCun say society is nowhere near developing AGI. Given the skepticism of leading lights in the field of modern AI and the very different nature of modern narrow AI systems to AGI, there is perhaps little basis to fears that society will be disrupted by a general artificial intelligence in the near future.

That said, some AI experts believe such projections are wildly optimistic given our limited understanding of the human brain, and that AGI is still centuries away.

While modern narrow AI may be limited to performing specific tasks, within their specialisms these systems are sometimes capable of superhuman performance, in some instances even demonstrating superior creativity, a trait often held up as intrinsically human.

There have been too many breakthroughs to put together a definitive list, but some highlights include: in 2009 Google showed it was possible for its self-driving Toyota Prius to complete more than 10 journeys of 100 miles each, setting society on a path towards driverless vehicles.

IBM Watson competing on Jeopardy! on January 14, 2011.

In 2011, the computer system IBM Watson made headlines worldwide when it won the US quiz show Jeopardy!, beating two of the best players the show had ever produced. To win the show, Watson used natural language processing and analytics on vast repositories of data that it processed to answer human-posed questions, often in a fraction of a second.

In 2012, another breakthrough heralded AI's potential to tackle a multitude of new tasks previously thought of as too complex for any machine. That year, the AlexNet system decisively triumphed in the ImageNet Large Scale Visual Recognition Challenge. AlexNet's accuracy was such that it halved the error rate compared to rival systems in the image-recognition contest.

AlexNet's performance demonstrated the power of learning systems based on neural networks, a model for machine learning that had existed for decades but that was finally realising its potential due to refinements to architecture and leaps in parallel processing power made possible by Moore's Law. The prowess of machine-learning systems at carrying out computer vision also hit the headlines that year, with Google training a system to recognise an internet favorite: pictures of cats.

The next demonstration of the efficacy of machine-learning systems that caught the public's attention was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, an ancient Chinese game whose complexity stumped computers for decades. Go has about 200 possible moves per turn, compared to about 20 in chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational point of view. Instead, AlphaGo was trained how to play the game by taking moves played by human experts in 30 million Go games and feeding them into deep-learning neural networks.
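The computational point can be made concrete with a little arithmetic. A quick sketch, using the rough branching factors above of 20 moves for chess and 200 for Go (real values vary by position):

```python
# Rough count of move sequences after `depth` turns for a fixed
# branching factor; the figures 20 (chess) and 200 (Go) are the
# approximations used above, not exact values.
def tree_size(branching_factor: int, depth: int) -> int:
    return branching_factor ** depth

for depth in (2, 4, 6):
    chess, go = tree_size(20, depth), tree_size(200, depth)
    print(f"depth {depth}: chess ~{chess:,}, Go ~{go:,} ({go // chess:,}x more)")
```

Even at six turns of look-ahead, Go's tree is a million times larger than chess's, which is why exhaustive search was never a realistic option.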

Training these deep learning networks can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently Google refined the training process with AlphaGo Zero, a system that played "completely random" games against itself, and then learnt from the results. Google DeepMind CEO Demis Hassabis has also unveiled a new version of AlphaGo Zero that has mastered the games of chess and shogi.

And AI continues to sprint past new milestones: a system trained by OpenAI has defeated the world's top players in one-on-one matches of the online multiplayer game Dota 2.

That same year, OpenAI created AI agents that invented their own language to cooperate and achieve their goal more effectively, shortly followed by Facebook training agents to negotiate and even lie.

2020 was the year in which an AI system seemingly gained the ability to write and talk like a human, about almost any topic you could think of.

The system in question, known as Generative Pre-trained Transformer 3 or GPT-3 for short, is a neural network trained on billions of English language articles available on the open web.

Soon after it was made available for testing by the not-for-profit organisation OpenAI, the internet was abuzz with GPT-3's ability to generate articles on almost any topic that was fed to it, articles that at first glance were often hard to distinguish from those written by a human. Similarly impressive results followed in other areas, with its ability to convincingly answer questions on a broad range of topics and even pass for a novice JavaScript coder.

But while many GPT-3 generated articles had an air of verisimilitude, further testing found the sentences generated often didn't pass muster, offering up superficially plausible but confused statements, as well as sometimes outright nonsense.

There's still considerable interest in using the model's natural language understanding as the basis of future services and it is available to select developers to build into software via OpenAI's beta API. It will also be incorporated into future services available via Microsoft's Azure cloud platform.

Perhaps the most striking example of AI's potential came late in 2020, when the Google attention-based neural network AlphaFold 2 demonstrated a result some have called worthy of a Nobel Prize for Chemistry.

The system's ability to look at a protein's building blocks, known as amino acids, and derive that protein's 3D structure could have a profound impact on the rate at which diseases are understood and medicines are developed. In the Critical Assessment of protein Structure Prediction contest, AlphaFold 2 was able to determine the 3D structure of a protein with an accuracy rivaling crystallography, the gold standard for convincingly modelling proteins.

Unlike crystallography, which takes months to return results, AlphaFold 2 can model proteins in hours. With the 3D structure of proteins playing such an important role in human biology and disease, such a speed-up has been heralded as a landmark breakthrough for medical science, not to mention potential applications in other areas where enzymes are used in biotech.

Practically all of the achievements mentioned so far stemmed from machine learning, a subset of AI that accounts for the vast majority of achievements in the field in recent years. When people talk about AI today they are generally talking about machine learning.

Currently enjoying something of a resurgence, machine learning is, in simple terms, where a computer system learns how to perform a task rather than being programmed how to do so. This description of machine learning dates all the way back to 1959, when it was coined by Arthur Samuel, a pioneer of the field who developed one of the world's first self-learning systems, the Samuel Checkers-playing Program.

To learn, these systems are fed huge amounts of data, which they then use to learn how to carry out a specific task, such as understanding speech or captioning a photograph. The quality and size of this dataset is important for building a system able to accurately carry out its designated task. For example, if you were building a machine-learning system to predict house prices, the training data should include not just the property size but other salient factors, such as the number of bedrooms or the size of the garden.
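The house-price example can be sketched in a few lines. Everything below is invented for illustration: the features are (size, bedrooms, garden size) and the prices are generated from a known linear rule, so we can check that the model recovers it from the data alone.

```python
# Toy supervised learning: fit a linear model to made-up house data by
# stochastic gradient descent on squared error. One weight per feature.
def predict(weights, bias, features):
    return bias + sum(w * f for w, f in zip(weights, features))

def train(data, lr=0.002, epochs=20000):
    n_features = len(data[0][0])
    weights, bias = [0.0] * n_features, 0.0
    for _ in range(epochs):
        for features, target in data:
            error = predict(weights, bias, features) - target
            bias -= lr * error
            weights = [w - lr * error * f for w, f in zip(weights, features)]
    return weights, bias

# price = 50*size + 20*bedrooms + 10*garden: an invented generating rule
data = [((size, beds, garden), 50 * size + 20 * beds + 10 * garden)
        for size in (8, 12, 15) for beds in (2, 3) for garden in (1, 4)]
weights, bias = train(data)
print([round(w, 2) for w in weights])  # learned weights approach [50, 20, 10]
```

The point of the toy: the model is never told the rule, only examples, and with all three salient features present it can fit the prices; drop a feature (say, bedrooms) and the fit degrades, which is the dataset-quality argument above.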

Key to machine learning success are neural networks. These mathematical models are able to tweak internal parameters to change what they output. During training, a neural network is fed datasets that teach it what it should spit out when presented with certain data. In concrete terms, the network might be fed greyscale images of the numbers between zero and nine, alongside a string of binary digits (zeroes and ones) that indicate which number is shown in each greyscale image. The network would then be trained, adjusting its internal parameters, until it classifies the number shown in each image with a high degree of accuracy. This trained neural network could then be used to classify other greyscale images of numbers between zero and nine. Such a network was used in a seminal paper showing the application of neural networks published by Yann LeCun in 1989, and the approach has been used by the US Postal Service to recognise handwritten zip codes.
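That training loop can be shrunk to a runnable toy. The sketch below is not LeCun's network: it is a single-layer perceptron on invented 3x3 black-and-white "images" of a bar and a box, but it shows the same idea of internal parameters being adjusted until the labels come out right.

```python
import random

random.seed(0)
ONE = (0, 1, 0, 0, 1, 0, 0, 1, 0)   # a crude 3x3 "1" (a vertical bar)
ZERO = (1, 1, 1, 1, 0, 1, 1, 1, 1)  # a crude 3x3 "0" (a hollow box)

def noisy(img):
    """Copy an image with one random pixel flipped."""
    img = list(img)
    i = random.randrange(9)
    img[i] = 1 - img[i]
    return img

# Labelled training set: label 1 for the bar, 0 for the box.
data = [(noisy(ONE), 1) for _ in range(50)] + [(noisy(ZERO), 0) for _ in range(50)]
random.shuffle(data)

weights, bias = [0.0] * 9, 0.0

def classify(pixels):
    return 1 if sum(w * p for w, p in zip(weights, pixels)) + bias > 0 else 0

for _ in range(20):                        # passes over the training data
    for pixels, label in data:
        error = label - classify(pixels)   # -1, 0, or +1
        bias += error
        weights = [w + error * p for w, p in zip(weights, pixels)]

correct = sum(classify(px) == lbl for px, lbl in data)
print(f"{correct}/100 training images classified correctly")
```

The two classes here are linearly separable, so a single layer suffices; real digit images need the multi-layer networks the article goes on to describe.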

The structure and functioning of neural networks is very loosely based on the connections between neurons in the brain. Neural networks are made up of interconnected layers of algorithms, which feed data into each other, and which can be trained to carry out specific tasks by modifying the importance attributed to data as it passes between these layers. During training of these neural networks, the weights attached to data as it passes between layers will continue to be varied until the output from the neural network is very close to what is desired, at which point the network will have 'learned' how to carry out a particular task. The desired output could be anything from correctly labelling fruit in an image to predicting when an elevator might fail based on its sensor data.

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a large number of sizeable layers that are trained using massive amounts of data. It is these deep neural networks that have fuelled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.

SEE:IT leader's guide to deep learning(Tech Pro Research)

There are various types of neural networks, with different strengths and weaknesses. Recurrent Neural Networks (RNN) are a type of neural net particularly well suited to Natural Language Processing (NLP), understanding the meaning of text, and to speech recognition, while convolutional neural networks have their roots in image recognition and have uses as diverse as recommender systems and NLP. The design of neural networks is also evolving, with researchers refining a more effective form of deep neural network called long short-term memory (LSTM), a type of RNN architecture used for tasks such as NLP and stock market prediction, allowing it to operate fast enough to be used in on-demand systems like Google Translate.

The structure and training of deep neural networks.

Another area of AI research is evolutionary computation, which borrows from Darwin's theory of natural selection, and sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem.

This approach has even been used to help design AI models, effectively using AI to help build AI. This use of evolutionary algorithms to optimize neural networks is called neuroevolution, and it could have an important role to play in helping design efficient AI as the use of intelligent systems becomes more prevalent, particularly as demand for data scientists often outstrips supply. The technique was showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.
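The generational loop just described can be sketched with the classic "OneMax" toy problem (maximise the number of ones in a bitstring). The selection scheme and parameters below are illustrative choices, not Uber's method:

```python
import random

random.seed(1)
LENGTH, POP, GENERATIONS = 30, 40, 60

def fitness(bits):                 # OneMax: fitness is simply the count of ones
    return sum(bits)

def crossover(a, b):               # single-point crossover of two parents
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):       # random bit flips between generations
    return [1 - bit if random.random() < rate else bit for bit in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    # truncation selection: the fitter half survive as parents
    parents = sorted(population, key=fitness, reverse=True)[:POP // 2]
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))  # climbs towards the optimum of 30 over the generations
```

Neuroevolution applies the same loop with a neural network's weights (or architecture) as the genome and task performance as the fitness function.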

Finally, there are expert systems, where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing that machine to mimic the behaviour of a human expert in a specific domain. An example of these knowledge-based systems is an autopilot system flying a plane.

As outlined above, the biggest breakthroughs for AI research in recent years have been in the field of machine learning, in particular within the field of deep learning.

This has been driven in part by the easy availability of data, but even more so by an explosion in parallel computing power, with the use of clusters of graphics processing units (GPUs) to train machine-learning systems becoming more prevalent.

Not only do these clusters offer vastly more powerful systems for training machine-learning models, but they are now widely available as cloud services over the internet. Over time the major tech firms, the likes of Google, Microsoft, and Tesla, have moved to using specialised chips tailored to both running, and more recently training, machine-learning models.

An example of one of these custom chips is Google's Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which useful machine-learning models built using Google's TensorFlow software library can infer information from data, as well as the rate at which they can be trained.

These chips are not just used to train up models for DeepMind and Google Brain, but also the models that underpin Google Translate and the image recognition in Google Photos, as well as services that allow the public to build machine-learning models using Google's TensorFlow Research Cloud. The third generation of these chips was unveiled at Google's I/O conference in May 2018, and has since been packaged into machine-learning powerhouses called pods that can carry out more than one hundred thousand trillion floating-point operations per second (100 petaflops). These ongoing TPU upgrades have allowed Google to improve its services built on top of machine-learning models, for instance halving the time taken to train models used in Google Translate.

As mentioned, machine learning is a subset of AI and is generally split into two main categories: supervised and unsupervised learning.

Supervised learning

A common technique for teaching AI systems is by training them using a very large number of labelled examples. These machine-learning systems are fed huge amounts of data, which has been annotated to highlight the features of interest. These might be photos labelled to indicate whether they contain a dog or written sentences that have footnotes to indicate whether the word 'bass' relates to music or a fish. Once trained, the system can then apply these labels to new data, for example to a dog in a photo that's just been uploaded.

This process of teaching a machine by example is called supervised learning and the role of labelling these examples is commonly carried out by online workers, employed through platforms like Amazon Mechanical Turk.

SEE: How artificial intelligence is taking call centers to the next level

Training these systems typically requires vast amounts of data, with some systems needing to scour millions of examples to learn how to carry out a task effectively, although this is increasingly possible in an age of big data and widespread data mining. Training datasets are huge and growing in size: Google's Open Images Dataset has about nine million images, while its labelled video repository YouTube-8M links to seven million labelled videos. ImageNet, one of the early databases of this kind, has more than 14 million categorized images. Compiled over two years, it was put together by nearly 50,000 people, most of whom were recruited through Amazon Mechanical Turk, who checked, sorted, and labelled almost one billion candidate pictures.

In the long run, having access to huge labelled datasets may also prove less important than access to large amounts of compute power.

In recent years, Generative Adversarial Networks (GANs) have been used in machine-learning systems that only require a small amount of labelled data alongside a large amount of unlabelled data, which, as the name suggests, requires less manual work to prepare.

This approach could allow for the increased use of semi-supervised learning, where systems can learn how to carry out tasks using a far smaller amount of labelled data than is necessary for training systems using supervised learning today.

Unsupervised learning

In contrast, unsupervised learning uses a different approach, where algorithms try to identify patterns in data, looking for similarities that can be used to categorise that data.

An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.

The algorithm isn't set up in advance to pick out specific types of data; it simply looks for data that can be grouped by its similarities, as with Google News grouping together stories on similar topics each day.
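A minimal sketch of the fruit example above: k-means clustering of items by weight alone, with no labels provided. The weights below are invented; the algorithm simply recovers the two groups from their similarity.

```python
import random

random.seed(2)
weights = [random.gauss(150, 10) for _ in range(20)]     # light fruit, ~150g
weights += [random.gauss(1200, 80) for _ in range(20)]   # heavy fruit, ~1200g

def kmeans(points, k=2, iterations=20):
    centres = random.sample(points, k)   # start from k random points
    for _ in range(iterations):
        # assign each point to its nearest centre...
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: abs(p - centres[i]))].append(p)
        # ...then move each centre to the mean of its cluster
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

centres = kmeans(weights)
print([round(c) for c in centres])  # one centre near 150, one near 1200
```

Note that nothing in the code names the groups; as in Google News's topic grouping, the categories emerge from the data itself.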

Reinforcement learning

A crude analogy for reinforcement learning is rewarding a pet with a treat when it performs a trick. In reinforcement learning, the system attempts to maximise a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.

An example of reinforcement learning is Google DeepMind's Deep Q-network, which has been used to best human performance in a variety of classic video games. The system is fed pixels from each game and determines various information, such as the distance between objects on screen.

By also looking at the score achieved in each game, the system builds a model of which action will maximise the score in different circumstances, for instance, in the case of the video game Breakout, where the paddle should be moved to in order to intercept the ball.

The approach is also used in robotics research, where reinforcement learning can help teach autonomous robots the optimal way to behave in real-world environments.
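The trial-and-error loop can be sketched with tabular Q-learning, a simple ancestor of the Deep Q-network mentioned above, on a made-up five-cell corridor where only reaching the final cell yields a reward:

```python
import random

random.seed(3)
N_STATES, ACTIONS = 5, (-1, +1)          # cells 0..4; move left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(300):                     # episodes of trial and error
    state = 0
    while state != N_STATES - 1:
        if random.random() < epsilon:    # sometimes explore at random...
            action = random.choice(ACTIONS)
        else:                            # ...otherwise exploit, breaking ties randomly
            best = max(q[(state, a)] for a in ACTIONS)
            action = random.choice([a for a in ACTIONS if q[(state, a)] == best])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # move the estimate towards reward plus discounted future value
        target = reward + gamma * max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (target - q[(state, action)])
        state = next_state

policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # the learned policy heads right, towards the reward
```

A Deep Q-network replaces the lookup table `q` with a neural network, which is what lets the same idea scale from five corridor cells to the raw pixels of an Atari game.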

Many AI-related technologies are approaching, or have already reached, the 'peak of inflated expectations' in Gartner's Hype Cycle, with the backlash-driven 'trough of disillusionment' lying in wait.

With AI playing an increasingly major role in modern software and services, each of the major tech firms is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services.

Each regularly makes headlines for breaking new ground in AI research, although it is probably Google, with its DeepMind AlphaFold and AlphaGo systems, that has made the biggest impact on the public awareness of AI.

All of the major cloud platforms (Amazon Web Services, Microsoft Azure, and Google Cloud Platform) provide access to GPU arrays for training and running machine-learning models, with Google also gearing up to let users use its Tensor Processing Units, custom chips whose design is optimized for training and running machine-learning models.

All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amounts of data needed to train machine-learning models, services to transform data to prepare it for analysis, visualisation tools to display the results clearly, and software that simplifies the building of models.

These cloud platforms are even simplifying the creation of custom machine-learning models, with Google offering a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires the user to have no machine-learning expertise.

Cloud-based, machine-learning services are constantly evolving. Amazon now offers a host of AWS offerings designed to streamline the process of training up machine-learning models and recently launched Amazon SageMaker Clarify, a tool to help organizations root out biases and imbalances in training data that could lead to skewed predictions by the trained model.

For those firms that don't want to build their own machine-learning models but instead want to consume AI-powered, on-demand services, such as voice, vision, and language recognition, Microsoft Azure stands out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella, and having invested $2bn in buying The Weather Channel to unlock a trove of data to augment its AI services.

Internally, each of the tech giants, and others such as Facebook, use AI to help drive myriad public services: serving search results, offering recommendations, recognizing people and things in photos, on-demand translation, spotting spam; the list is extensive.

But one of the most visible manifestations of this AI war has been the rise of virtual assistants, such as Apple's Siri, Amazon's Alexa, the Google Assistant, and Microsoft Cortana.

The Amazon Echo Plus is a smart speaker with access to Amazon's Alexa virtual assistant built in.

Relying heavily on voice recognition and natural-language processing, as well as needing an immense corpus to draw upon to answer queries, a huge amount of tech goes into developing these assistants.

But while Apple's Siri may have come to prominence first, it is Google and Amazon whose assistants have since overtaken Apple in the AI space: Google Assistant with its ability to answer a wide range of queries, and Amazon's Alexa with the massive number of 'Skills' that third-party developers have created to add to its capabilities.

Over time, these assistants are gaining abilities that make them more responsive and better able to handle the types of questions people ask in regular conversations. For example, Google Assistant now offers a feature called Continued Conversation, where a user can ask follow-up questions to their initial query, such as 'What's the weather like today?', followed by 'What about tomorrow?' and the system understands the follow-up question also relates to the weather.

These assistants and associated services can also handle far more than just speech, with the latest incarnation of Google Lens able to translate text in images and allow you to search for clothes or furniture using photos.

SEE: How we learned to talk to computers, and how they learned to answer back (PDF download)

Despite being built into Windows 10, Cortana has had a particularly rough time of late, with Amazon's Alexa now available for free on Windows 10 PCs, while Microsoft revamped Cortana's role in the operating system to focus more on productivity tasks, such as managing the user's schedule, rather than more consumer-focused features found in other assistants, such as playing music.

It'd be a big mistake to think the US tech giants have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo are investing heavily in AI in fields ranging from ecommerce to autonomous driving. As a country, China is pursuing a three-step plan to turn AI into a core industry, one expected to be worth 150 billion yuan ($22bn) by the end of 2020, with the aim of becoming the world's leading AI power by 2030.

Baidu has invested in developing self-driving cars powered by its deep-learning algorithm, Baidu AutoBrain. Following several years of tests, its Apollo self-driving car has racked up more than three million miles of driving in tests and carried over 100,000 passengers in 27 cities worldwide.

Baidu launched a fleet of 40 Apollo Go Robotaxis in Beijing this year and the company's founder has predicted that self-driving vehicles will be common in China's cities within five years.

Baidu's self-driving car, a modified BMW 3 series.

The combination of weak privacy laws, huge investment, concerted data-gathering, and big data analytics by major firms like Baidu, Alibaba, and Tencent, means that some analysts believe China will have an advantage over the US when it comes to future AI research, with one analyst describing the chances of China taking the lead over the US as 500 to one in China's favor.

While you could buy a moderately powerful Nvidia GPU for your PC (somewhere around the Nvidia GeForce RTX 2060 or faster) and start training a machine-learning model, probably the easiest way to experiment with AI-related services is via the cloud.

All of the major tech firms offer various AI services, from the infrastructure to build and train your own machine-learning models through to web services that allow you to access AI-powered tools such as speech, language, vision and sentiment recognition on-demand.


Decades-old ASCII adventure NetHack may hint at the future of AI – TechCrunch

Machine learning models have already mastered chess, Go, Atari games and more, but for AI to ascend to the next level, researchers at Facebook intend for it to take on a different kind of game: the notoriously difficult and infinitely complex NetHack.

"We wanted to construct what we think is the most accessible grand challenge with this game. It won't solve AI, but it will unlock pathways towards better AI," said Facebook AI Research's Edward Grefenstette. "Games are a good domain to find our assumptions about what makes machines intelligent and break them."

You may not be familiar with NetHack, but it's one of the most influential games of all time. You're an adventurer in a fantasy world, delving through the increasingly dangerous depths of a dungeon that's different every time. You must battle monsters, navigate traps and other hazards, and meanwhile stay on good terms with your god. It's the first roguelike (after Rogue, its immediate and much simpler predecessor), arguably still the best, and almost certainly the hardest.

(It's free, by the way, and you can download and play it on nearly any platform.)

Its simple ASCII graphics, using a g for a goblin, an @ for the player, and lines and dots for the level's architecture, belie its incredible complexity. NetHack made its debut in 1987 and has been under active development ever since, with its shifting team of developers expanding its roster of objects and creatures, its rules, and the countless, countless interactions between them all.

And this is part of what makes NetHack such a difficult and interesting challenge for AI: it's so open-ended. Not only is the world different every time, but every object and creature can interact in new ways, most of them hand-coded over decades to cover every possible player choice.

NetHack with a tile-based graphics update; all the information is still available via text.

"Atari, Dota 2, StarCraft 2: the solutions we've had to make progress there are very interesting. NetHack just presents different challenges. You have to rely on human knowledge to play the game as a human," said Grefenstette.

In these other games, there's a more or less obvious strategy to winning. Of course it's more complex in a game like Dota 2 than in an Atari 800 game, but the idea is the same: there are pieces the player controls, a game board or environment, and win conditions to pursue. That's kind of the case in NetHack, but it's weirder than that. For one thing, the game is different every time, and not just in the details.

"New dungeon, new world, new monsters and items, you don't have a save point. If you make a mistake and die you don't get a second shot. It's a bit like real life," said Grefenstette. "You have to learn from mistakes and come to new situations armed with that knowledge."

Drinking a corrosive potion is a bad idea, of course, but what about throwing it at a monster? Coating your weapon with it? Pouring it on the lock of a treasure chest? Diluting it with water? We have intuitive ideas about these actions, but a game-playing AI doesn't think the way we do.

The depth and complexity of the systems in NetHack are difficult to explain, but that diversity and difficulty make the game a perfect candidate for a competition, according to Grefenstette. "You have to rely on human knowledge to play the game," he said.

People have been designing bots to play NetHack for many years that rely not on neural networks but on decision trees as complex as the game itself. The team at Facebook Research hopes to engender a new approach by building a training environment that people can test machine learning-based game-playing algorithms on.

NetHack screens with labels showing what the AI is aware of.

The NetHack Learning Environment was actually put together last year, but the NetHack Challenge is only just now getting started. The NLE is basically a version of the game embedded in a dedicated computing environment that lets an AI interact with it through text commands (directions, or actions like 'attack' or 'quaff').
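The interaction loop such an environment exposes can be illustrated with a stand-in. Everything below is a stub for illustration, not the real NLE package: the "dungeon" is just a ten-cell corridor, the commands are invented, and the agent picks actions at random, which is roughly the baseline any learning agent must beat.

```python
import random

class ToyDungeon:
    """A stand-in environment with a reset/step interface in the style
    the NLE uses: reset() starts an episode, step(action) returns
    (observation, reward, done). Purely illustrative."""
    def reset(self):
        self.position = 0
        return self.position

    def step(self, action):
        if action == "east":
            self.position += 1
        elif action == "west":
            self.position = max(0, self.position - 1)
        done = self.position >= 9               # reached the stairs down
        return self.position, (1.0 if done else 0.0), done

# A random agent interacting with the environment via text commands.
random.seed(4)
env = ToyDungeon()
obs, done, steps = env.reset(), False, 0
while not done and steps < 1000:
    obs, reward, done = env.step(random.choice(["east", "west", "search"]))
    steps += 1
print(done, steps)
```

Because everything flows through text commands and small observations rather than rendered frames, episodes like this can be simulated extremely cheaply, which is the accessibility argument Grefenstette makes below.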

It's a tempting target for ambitious AI designers. While games like StarCraft 2 may enjoy a higher profile in some ways, NetHack is legendary, and the idea of building a model along completely different lines from those used to dominate other games is an interesting challenge.

It's also, as Grefenstette explained, a more accessible one than many in the past. If you wanted to build an AI for StarCraft 2, you needed a lot of computing power available to run visual recognition engines on the imagery from the game. But in this case the entire game is transmitted via text, making it extremely efficient to work with. It can be played thousands of times faster than any human could with even the most basic computing setup. That leaves the challenge wide open to individuals and groups who don't have access to the kind of high-power setups necessary to power other machine learning methods.

We wanted to create a research environment that had a lot of challenges for the AI community, but not restrict it to only large academic labs, he said.

For the next few months, NLE will be available for people to test on, and competitors can basically build their bot or AI by whatever means they choose. But when the competition itself starts in earnest on October 15, they'll be limited to interacting with the game in its controlled environment through standard commands: no special access, no inspecting RAM, and so on.

The goal of the competition will be to complete the game, and the Facebook team will track how many times the agent 'ascends', as it's called in NetHack, in a set amount of time. "But we're assuming this is going to be zero for everyone," Grefenstette admitted. After all, this is one of the hardest games ever made, and even humans who have played it for years have trouble winning even once in a lifetime, let alone several times in a row. There will be other scoring metrics to judge winners in a number of categories.

The hope is that this challenge provides the seed of a new approach to AI, one that more fundamentally resembles actual human thinking. Shortcuts, trial and error, score-hacking, and zerging won't work here; the agent needs to learn systems of logic and apply them flexibly and intelligently, or die horribly at the hands of an enraged centaur or owlbear.

You can check out the rules and other specifics of the NetHack Challenge here. Results will be announced at the NeurIPS conference later this year.

See original here:

Decades-old ASCII adventure NetHack may hint at the future of AI - TechCrunch

AI startup investment is on pace for a record year – TechCrunch

The startup investing market is crowded, expensive and rapid-fire today as venture capitalists work to preempt one another, hoping to deploy funds into hot companies before their competitors. The AI startup market may be even hotter than the average technology niche.

This should not surprise.

In the wake of the Microsoft-Nuance deal, The Exchange reported that it would be reasonable to anticipate an even more active and competitive market for AI-powered startups. Our thesis was that after Redmond dropped nearly $20 billion for the AI company, investors would have a fresh incentive to invest in upstarts with an AI focus or strong AI component; exits, especially large transactions, have a way of spurring investor interest in related companies.

That expectation is coming true. Investors The Exchange reached out to in recent days reported a fierce market for AI startups.


But don't presume that investors are simply falling over one another to fund companies betting on a future that may or may not arrive. Per a Signal AI survey of 1,000 C-level executives, nearly 92% thought that companies should lean on AI to improve their decision-making processes. And 79% of respondents said that companies are already doing so.

The gap between the two numbers implies that there is space in the market for more corporations to learn to lean on AI-powered software solutions, while the first metric points to a huge total addressable market for startups constructing software built on a foundation of artificial intelligence.

Now deep in the second quarter, we're diving back into the AI startup market this morning, leaning on notes from Blumberg Capital's David Blumberg, Glasswing Ventures' Rudina Seseri, Atomico's Ben Blume and Jocelyn Goldfein of Zetta Venture Partners. We'll start by looking at recent venture capital data regarding AI startups and dig into what VCs are seeing in both the U.S. and European markets before chatting about applied AI versus core AI, and in which context VCs might still care about the latter.

The exit market for AI startups is more than just the big Microsoft-Nuance deal. CB Insights reports that four of the largest five American tech companies have bought a dozen or more AI-focused startups to date, with Apple leading the pack with 29 such transactions.

See the original post here:

AI startup investment is on pace for a record year - TechCrunch

Is the power sector seeing the beginnings of an AI investment boom? – Power Technology

The power industry is seeing an increase in artificial intelligence (AI) investment across several key metrics, according to an analysis of GlobalData data.

AI is gaining an increasing presence across multiple sectors, with top companies completing more AI deals, hiring for more AI roles and mentioning it more frequently in company reports at the start of 2021.

GlobalData's thematic approach to sector activity seeks to group key company information on hiring, deals, patents and more by topic to see which companies are best placed to weather the disruptions coming to their industries.

These themes, of which AI is one, are best thought of as any issue that keeps a CEO awake at night, and by tracking them it becomes possible to ascertain which companies are leading the way on specific issues and which are dragging their heels.

One area in which there has been some decrease in AI investment among power companies is the number of deals. GlobalData figures show that there were 13 AI deals in power in the first quarter of 2019. By the first quarter of 2021, that number was six.

Hiring patterns within the power sector as a whole point towards an increase in the level of attention being shown to AI-related roles. There was a monthly average of 1,008 actively advertised open AI roles within the industry in April this year, up from a monthly average of 669 in December 2020.

It is also apparent from an analysis of keyword mentions in financial filings that AI is occupying the minds of power companies to a lesser extent.

There have been 164 mentions of AI across the filings of the biggest power companies so far in 2021, equating to 7.3% of all tech theme mentions. This figure represents a decrease compared to 2016, when AI represented 12.9% of the tech theme mentions in company filings.
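As a sanity check, the 2021 figures quoted above pin down the implied total volume of tech-theme mentions, assuming the percentage is a simple share of all mentions:

```python
# Back-of-envelope check of the filing figures quoted above.
ai_mentions_2021 = 164   # AI mentions in 2021 filings so far
ai_share_2021 = 0.073    # AI's 7.3% share of all tech-theme mentions

# If AI accounts for 7.3% of mentions, the implied total is:
total_tech_mentions_2021 = ai_mentions_2021 / ai_share_2021
print(f"Implied total tech-theme mentions in 2021: ~{total_tech_mentions_2021:.0f}")
```

In other words, roughly 2,200 tech-theme mentions overall, of which AI is a shrinking slice relative to its 12.9% share in 2016.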

AI is increasingly fueling innovation in the power sector, particularly in the past six years. There were, on average, 34 power patents related to AI granted each year from 2000 to 2014. That figure has risen to an average of 188 patents since then, reaching 230 in 2020.


More:

Is the power sector seeing the beginnings of an AI investment boom? - Power Technology