Understanding our place in the universe | MIT News | Massachusetts … – MIT News

Brian Nord first fell in love with physics when he was a teenager growing up in Wisconsin. His high school physics program wasn't exceptional, and he sometimes struggled to keep up with class material, but those difficulties did nothing to dampen his interest in the subject. In addition to the main curriculum, students were encouraged to independently study topics they found interesting, and Nord quickly developed a fascination with the cosmos. "A touchstone that I often come back to is space," he says. "The mystery of traveling in it and seeing what's at the edge."

Nord was an avid reader of comic books, and astrophysics appealed to his desire to become a part of something bigger. "There always seemed to be something special about having this kinship with the universe around you," he recalls. "I always thought it would be cool if I could have that deep connection to physics."

Nord began to cultivate that connection as an undergraduate at The Johns Hopkins University. After graduating with a BA in physics, he went on to study at the University of Michigan, where he earned an MS and PhD in the same field. By this point, he was already thinking big, but he wanted to think even bigger. This desire for a more comprehensive understanding of the universe led him away from astrophysics and toward the more expansive field of cosmology. "Cosmology deals with the whole kit and caboodle, the whole shebang," he explains. "Our biggest questions are about the origin and the fate of the universe."

Dark mysteries

Nord was particularly interested in parts of the universe that can't be observed through traditional means. Evidence suggests that dark matter makes up the majority of mass in the universe and provides most of its gravity, but its nature largely remains in the realm of hypothesis and speculation. It doesn't absorb, reflect, or emit any type of electromagnetic radiation, which makes it nearly impossible for scientists to detect. But while dark matter provides gravity to pull the universe together, an equally mysterious force, dark energy, is pulling it apart. "We know even less about dark energy than we do about dark matter," Nord explains.

For the past 15 years, Nord has been attempting to close that gap in our knowledge. Part of his work focuses on the statistical modeling of galaxy clusters and their ability to distort and magnify light as it travels through the cosmos. This effect, which is known as strong gravitational lensing, is a useful tool for detecting the influence of dark matter on gravity and for measuring how dark energy affects the expansion rate of the universe.

After earning his PhD, Nord remained at the University of Michigan to continue his research as part of a postdoctoral fellowship. He currently holds a position at the Fermi National Accelerator Laboratory and is a senior member of the Kavli Institute for Cosmological Physics at the University of Chicago. He continues to investigate questions about the origin and destiny of the universe, but his more recent work has also focused on improving the ways in which we make scientific discoveries.

AI powerup

When it comes to addressing big questions about the nature of the cosmos, Nord has consistently run into one major problem: although his mastery of physics can sometimes make him feel like a superhero, he's only human, and humans aren't perfect. They make mistakes, adapt slowly to new information, and take a long time to get things done.

The solution, Nord argues, is to go beyond the human, into the realm of algorithms and models. As part of Fermilab's Artificial Intelligence Project, he spends his days teaching machines how to analyze cosmological data, a task for which they are better suited than most human scientists. "Artificial intelligence can give us models that are more flexible than what we can create ourselves with pen and paper," Nord explains. "In a lot of cases, it does better than humans do."

Nord is continuing this research at MIT as part of the Martin Luther King Jr. (MLK) Visiting Scholars and Professors Program. Earlier this year, he joined the Laboratory for Nuclear Science (LNS), with Jesse Thaler in the Department of Physics and Center for Theoretical Physics (CTP) as his faculty host. Thaler is the director of the National Science Foundation's Institute for Artificial Intelligence and Fundamental Interactions (IAIFI). Since arriving on campus, Nord has focused his efforts on exploring the potential of AI to design new scientific experiments and instruments. These processes ordinarily take an enormous amount of time, he explains, but AI could rapidly accelerate them. "Could we design the next particle collider or the next telescope in less than five years, instead of 30?" he wonders.

But if Nord has learned anything from the comics of his youth, it is that with great power comes great responsibility. AI is an incredible scientific asset, but it can also be used for more nefarious purposes. The same computer algorithms that could build the next particle collider also underlie things like facial recognition software and the risk assessment tools that inform sentencing decisions in criminal court. Many of these algorithms are deeply biased against people of color. "It's a double-edged sword," Nord explains. "Because if [AI] works better for science, it works better for facial recognition. So, I'm working against myself."

Culture change superpowers

In recent years, Nord has attempted to develop methods to make the application of AI more ethical, and his work has focused on the broad intersections between ethics, justice, and scientific discovery. His efforts to combat racism in STEM have established him as a leader in the movement to address inequities and oppression in academic and research environments. In June of 2020, he collaborated with members of Particles for Justice, a group that boasts MIT professors Daniel Harlow and Tracy Slatyer, as well as former MLK Visiting Scholar and CTP researcher Chanda Prescod-Weinstein, to create the academic Strike for Black Lives. The strike, which emerged as a response to the police killings of George Floyd, Breonna Taylor, and many others, called on the academic community to take a stand against anti-Black racism.

Nord is also the co-author of Black Light, a curriculum for learning about Black experiences, and the co-founder of Change Now, which produced a list of calls for action to make a more just laboratory environment at Fermilab. As the co-founder of Deep Skies, he also strives to foster justice-oriented research communities free of traditional hierarchies and oppressive power structures. "The basic idea is just humanity over productivity," he explains.

This work has led Nord to reconsider what motivated him to pursue a career in physics in the first place. When he first discovered his passion for the subject as a teenager, he knew he wanted to use physics to help people, but he wasn't sure how. "I was thinking I'd make some technology that will save lives, and I still hope to do that," he says. "But I think maybe more of my direct impact, at least in this stage of my career, is in trying to change the culture."

Physics may not have granted Nord flight or X-ray vision, not yet, at least. But over the course of his long career, he has discovered a more substantial power. "If I can understand the universe," he says, "maybe that will help me understand myself and my place in the world and our place as humanity."

See the original post here:

Understanding our place in the universe | MIT News | Massachusetts ... - MIT News

CU to showcase 75 years of innovation and impact at the 38th … – University of Colorado Boulder

Representatives from across the university, including LASP, CU Boulder's AeroSpace Ventures initiative, and the College of Engineering and Applied Science at the University of Colorado Colorado Springs (UCCS), are jointly hosting an exhibit booth at this year's Space Symposium. Together, they will highlight CU's prominence in aerospace engineering and climate and space-weather research, as well as the crucial role the university plays in developing Colorado's aerospace workforce.

"This coordinated presence at the Space Symposium is a great example of how leaders and units from across the university system collaborate to highlight CU's extensive aerospace expertise and the many resources our campuses have to offer," said Chris Muldrow, the Smead Director and Department Chief of Staff in the Ann and H.J. Smead Department of Aerospace Engineering Sciences. "We are a world leader in aerospace, and our impact is multiplied when we bring the best of CU aerospace to the space ecosystem."

The College of Engineering and Applied Science at UCCS produces highly qualified undergraduate and graduate students with solid technical backgrounds plus crucial experiential learning. The college's new Bachelor of Science degree in Aerospace Engineering, which launched this fall with a full cohort of students, will leverage the existing expertise in the Department of Mechanical and Aerospace Engineering. This summer, to stand up the program, the college broke ground on a new facility, the Anschutz Engineering Center, slated to open for classes in January 2024, says Sue McClernan, the college's career and industry outreach program director. "Student and workforce demand prompted the new degree, and we believe the Bachelor of Science in Aerospace Engineering at UCCS is well suited for a thriving aerospace economy in Colorado Springs and the region."

AeroSpace Ventures brings together researchers, students, industry leaders, government partners, and entrepreneurs to envision and create the future for space and Earth systems. "This initiative is helping to drive the discovery and innovation that will shape the 21st century's aerospace economy," says George Hatcher III, executive director of industry and foundation relations at CU Boulder. "AeroSpace Ventures brings together CU Boulder departments, institutes, centers, and programs across the university to amplify the more than $120 million in aerospace-related research that happens on campus each year."

CU Boulder also hosts the Space Weather Technology, Research and Education Center to serve as a catalyst site for space weather research and technology development among CU and other Front Range space research and technology organizations. The campus also hosts the Center for National Security Initiatives. It provides high-impact national security research to government and industry partners and addresses the ever-increasing demand for qualified and experienced aerospace and defense professionals in Colorado and across the nation.

From LASP's humble beginnings 75 years ago, the University of Colorado has developed into an epicenter of space research and aerospace workforce development crucial to Colorado and our nation.

Founded a decade before NASA, the Laboratory for Atmospheric and Space Physics at the University of Colorado Boulder is on a mission to transform human understanding of the cosmos by pioneering new technologies and approaches to space science. LASP is the only academic research institute in the world to have sent instruments to every planet in our solar system. LASP began celebrating its 75th anniversary in April 2023.

View original post here:

CU to showcase 75 years of innovation and impact at the 38th ... - University of Colorado Boulder

Could a Rogue Planet Destroy the Earth? – Newsweek

The vast universe is filled with strange and mysterious phenomena, from quasars and black holes to the Boötes void. One bizarre element in space is rogue planets, worlds just like our own, untethered by a star, wandering free and alone through the abyss.

Could one of these lonely planets find its way to our own solar system or even collide with the Earth?

Rogue planets, also known as free-floating planets, are thought to be a result of gravitational interactions in the early days of the formation of solar systems. Or they could be a result of the failed formation of stars.

"Modern theories of planetary system formation suggest that many planets are formed around young stars when they are in the short-lived phase of growing their planetary systems. But many of these are ejected due to gravitational scattering as the planetary systems organize themselves over time," Michael Zemcov, an associate professor of physics at the Rochester Institute of Technology, told Newsweek.

As a solar system forms, numerous chunks of rock of varying sizes and speeds whirl around each other in chaotic orbits. As these bodies soar past each other, they alter the orbits of other bodies as a result of their gravity.

"In typical three-body interactions characteristic of these ejection events, it is usually the lowest-mass object that gets ejected," Zemcov said. "So I think a generic prediction of these 'clearing out' episodes during planetary system formation is that the heavier objectswhether rocky or, more likely, ice or gas giantssurvive and the smaller ones don't."

Rogue planets may also come from another source, which is a star that failed to ignite and instead became stuck as a lone gas giant.

"They might form out of gas clouds in space, in the same way stars do, or they may have formed in a disc around a star and then been ejected due to an encounter with another star or an interaction with another planet in the same system," Richard Parker, a lecturer in astrophysics at the University of Sheffield in the U.K., told Newsweek.

"In the former case, they are likely to be predominantly gas giants like Jupiter. In the latter case, they could be rocky like Earth," he said.

Scientists aren't sure how many rogue planets are in our Milky Way galaxy since they are extremely hard to observe.

"[There are] likely many billions, or more, but they are ferociously hard to see," Zemcov said. "They would emit very little light on their own, mostly at very long wavelengths that are extremely difficult to pick out of the background emission. As a result, our primary way of detecting them is via gravitational microlensing, where we monitor a field of stars and then look for the light of a background source being temporarily magnified by the mass of a rogue planet as it passes precisely between our telescopes and the background star.

He continued: "We have found many objects this way, but without other information the lensing objects are impossible to weigh. So we don't have a good idea of demographics except in the general sense that larger things should be easier to see just because their temporary magnification is brighter and longer."

While we don't have a true idea of the number of rogue planets, scientists expect it is large.

"We expect a really big population," Alberto Fairn, a planetary scientist and astrobiologist at Cornell University, told Newsweek. "Think this way: The smallest the object in our galaxy, the larger the number of them we expect."

According to Dorian Abbot, a geophysical sciences professor at the University of Chicago, it is likely that most rogue planets are terrestrial because there are probably more terrestrial planets in general.

"It's easier to throw them out through an interaction with a gas giant because they are less massive. But gas giants can be ejected too. The Hot Jupiters detected in [around] 1 percent of systems suggest major dynamical evolution of those systems since Jupiters have to form where it is cold. This dynamical evolution could be associated with generating rogue planets," Abbot told Newsweek.

With all these invisible planets zipping around the galaxy, could one enter our solar system or even collide with the Earth?

"Assuming that there is a rogue planet for every star in the Milky Way, and we assume the solar system will be in a similar region of the galaxy over its lifetime, then I would estimate that the likelihood of a rogue planet coming within the solar system over the next 1,000 years to be a 1 in a billion chance," Garrett Brown, a celestial mechanics and computational physics researcher at the University of Toronto, told Newsweek.

"Here, I define 'coming within the solar system' to mean that we could see the rogue planet in such a way that when we look at it with a telescope it would look like Neptune or Pluto," Brown said. "For a rogue planet that were to come at least this close, there would be a 1 in 2,000 chance that it would directly alter Earth's orbit."

He continued: "It's difficult to say how likely it would be to actually collide with Earth without a more detailed analysis, but it would be much, much less likely. Thus, I would estimate the likelihood of a rogue planet coming closer to the Earth than Mars or Venus to be 1 in 2 trillion in the next 1,000 years. If there is one heading our way within the next 1,000 years, it would currently be about 0.2 light-years away."

Even if a rogue planet came close to the Earth, the interaction may not even destroy the planet if there wasn't a direct hit.

"It would need to come close enough to Earth to either collide with it or, a bit less unlikely, alter its orbit. If it does collide, this would be at high speed and likely destroy Earth, if it is comparable in mass and density to Earth," Jacco van Loon, an astrophysicist at Keele University, told Newsweek.

"A planet like Jupiter might even swallow Earth. Or Earth might come out the other way if it is a grazing encounter, but probably without its atmosphere," he said.

Rather than destroying the Earth, a passing rogue planet could even bump our planet out of orbit and cause it to become a rogue planet itself.

"I would say the more scary thing, rather than a direct collision, is having the Earth be scattered by a brief encounter by, say, an exo-Neptune passing through, which would move us to a different orbit or perhaps eject us from the solar system altogether," Zemcov said. "Then we would likely all freeze, or possibly cook, in a matter of weeks. That said, I am not losing any sleep over such a possibility."

It's very unlikely that interactions of the planets already in our solar system could suddenly boot Earth out into the abyss, thanks to the planets' orbits having had billions of years to settle into an equilibrium.

"One open and extremely good question is why our own solar system has been stable over 4.5 billion years," Zemcov said. "In many ways, it shouldn't be. As an example, some models for planet formation suggest that Jupiter was formed much closer in and then somehow migrated out to where it is today, likely by exchanging momentum with something that got ejected from our solar system."

He continued: "How we might retain the four rocky planets in the inner solar system in such a scenario is a complete mystery. And then we look around our solar system and see evidence for massive disruptionsfor example, Uranus rotating on its side. And it's clear that over astronomical time scales the details of these [solar systems] are not terribly robust."

One possibility is that there were once more planets in our early solar system but one was ejected as a rogue planet, leaving the solar system to never return.

"What's possible is that our sun would have ejected a rogue billions of years ago, when Jupiter and Saturn traveled from their original inner orbits to their actual positions. That's a scenario we cannot discard but we cannot confirm either," Fairn said.

Could a planet be ejected after life has evolved on its surface, or could life evolve after the planet left its star?

"Another much more interesting, to me, possible feature of rogue planets [is] the possibility that they can host life," Lorenzo Iorio, an astronomy and astrophysics professor at the Italian Ministry of Education, Universities and Research, told Newsweek.

Even without a star, life could be sustained under certain conditions. According to the Planetary Society, if a rogue planet had a large moon that orbited at close quarters, it could keep the center of the planet hot enough so that life could exist in volcanic vent environments.

So, while a rogue planet's collision would likely spell the end of life on Earth, such planets may be capable of hosting their own unique ecosystems.

Do you have a tip on a science story that Newsweek should be covering? Do you have a question about rogue planets? Let us know via science@newsweek.com.

Read the original here:

Could a Rogue Planet Destroy the Earth? - Newsweek

Does the sun really belong in its family? Astronomers get to the … – Space.com

The sun is having an identity crisis: Because it shows different magnetic activity and rotation rates than other stars in its current classification, scientists have debated whether the sun is really like the other stars in its family.

Now, the debate may finally be settled, as an investigation has found that the sun does indeed belong in this group.

The research was led by Ângela Santos, a scientist at the Institute of Astrophysics and Space Sciences in Portugal, whose work focuses primarily on how solar and stellar rotation and stellar magnetic activity change as stars evolve. She explained the controversy over the sun's classification.


"In the community, there is an ongoing debate on whether the sun is a 'sun-like' star," Santos said in a statement (opens in new tab). "In particular, about its magnetic activity; several studies suggested that stars similar to the sun were significantly more active. However, the problem doesn't seem to be with the sun, but with the stars classified as sun-like, because there are several limitations and biases in the observational data and the inferred stellar properties."

To investigate the question of whether the sun is truly a sun-like star, Santos and the team turned to data from NASA's now-retired Kepler space telescope, the European Space Agency's (ESA) Gaia mission, and the NASA-ESA Solar and Heliospheric Observatory (SOHO).

They focused on multiple stars that have similar stellar properties and magnetic activity to the sun and compared the data with observations of the sun's last two 11-year solar cycles collected by SOHO, which launched in 1995.

One star featured in the data, which is nicknamed "Doris" and officially designated KIC 8006161, is a blue star of a similar size and mass to the sun. The researchers had previously noted that the amplitude of Doris' stellar cycle was twice that of the last two solar cycles, indicating that Doris' activity cycle was twice as strong as our star's, even though the two stars were similar in many ways.

The difference was caused by a disparity in the proportions of elements heavier than hydrogen and helium that make up the two stars' compositions. Astronomers call elements heavier than helium "metals," and they refer to the proportion of these elements as the "metallicity" of the star. Doris has a higher metallicity than the sun, and the researchers linked this difference to stronger activity.

"The difference was the metallicity," Santos said. "Our interpretation is that the effect of metallicity, which leads to a deeper convection zone, produces a more effective dynamo, which leads to a stronger activity cycle."

The researchers then went back and disregarded metallicity to select stars from their catalogs that demonstrated similar behavior to Doris. They found that most of these stars also had high metallicities, though Doris was still the most active of these stars.

"In our selection, the only parameter that could lead to this excess is the rotation period," Santos said. "In particular, Doris had a longer period than the sun. And, in fact, we found evidence of a correlation between the rotation period and metallicity."

In addition, despite being younger than the sun, Doris rotates more slowly. This is typical; astrophysicists think all stars are born spinning and slow down, or "spin down," as they age. This slowdown happens because of a phenomenon called "magnetic braking," in which material is caught by the star's magnetic field and eventually flung into space, carrying some of the star's angular (rotational) momentum with it.

Doris' stronger magnetic activity is causing more magnetic braking, leading it to spin more slowly than the sun, the researchers explained.

Despite some key differences, however, the sun fits in nicely with a family of stellar objects aptly called sun-like stars, the team concluded.

"What we found is that although there are stars which are more active than the sun, the sun is indeed a completely normal sun-like star," Santos said.

The research was published in the April edition of the journal Astronomy & Astrophysics.


See the original post:

Does the sun really belong in its family? Astronomers get to the ... - Space.com

Kenya's Third Attempt to Launch First 3U Observation Satellite Delayed – Voice of America – VOA News

Taifa 1, Kenya's first operational 3U nanosatellite, was set to launch aboard the SpaceX Falcon 9 rocket from the Vandenberg Space Force Base in the U.S. state of California on Friday after being delayed twice. But the launch was scrubbed at the last minute because of unfavorable weather.

Teddy Warria, with Africa's Talking Limited, a high-tech company, traveled to the University of Nairobi in Kenya from Kisumu, 563 kilometers west of Nairobi. He said he'll stay as long as it takes to witness the historic day.

"It shows us through science, technology, engineering and mathematics, and if we apply the lessons learned from STEM, we can go as far as our minds and imagination can take us," Warria said.

Regardless of the delay, Charles Mwangi, the acting director of space sector and technology development at the Kenya Space Agency, said the satellite is quite significant.

"... [I]t's initiating conversations we've not been having in terms of what our role within the space sector should be," Mwangi said. "How do we leverage the potential space to address our societal need. More importantly, how do we catalyze research and activities of developing systems within our region."

FILE - Delegates attend the preparation of the launch of Kenya's first operational 3U Earth observation satellite, the Taifa-1, at the University of Nairobi's Taifa Hall, in Nairobi, Kenya, April 11, 2023.

Mwangi told VOA that launching the satellite will have some major benefits "that will help us in monitoring our forests, doing crop prediction, determining the yield of our crops, disaster management, planning."

The satellite was developed by nine Kenyan engineers and cost $385,000 to build. The engineers collaborated with Bulgarian aerospace manufacturer Endurosat AD for testing and parts.

Pattern Odhiambo, an electrical and electronics engineer at the Kenyan Space Agency, who worked on the Taifa 1 mission, said, "I took part in deciding what kind of a camera we are supposed to have on this mission, so that we can meet the mission's objectives, which is to take images over the Kenyan territory for agricultural use, for urban planning, monitoring of natural resources and the likes."

And, as the communication subsystem lead, he also had other tasks.

"I took part in the design of the radio frequency link between the satellite and the ground station, the decision-making process on the kind of modulation schemes you can have on the satellite, the kind of transmitter power, the kind of antenna you are supposed to have," he said.

Samuel Nyangi, a University of Nairobi graduate in astronomy and astrophysics, was also at the university to witness his country's history-making.

"If you look at the African countries that are economically strong Nigeria, South Africa, Egypt they all have very strong space industries. We are so proud of the Kenya Space Agency, having taken this initiative, because the satellite data that we use [is] from foreign nations, specifically NASA in the United States. For us having our own data, tailoring it to our own needs as Kenyans, it's a very big step," Nyangi said.

This sentiment is echoed by Paul Baki, professor of Physics at the Technical University of Kenya, who participated in a panel discussion on education and research to help answer students' questions. Baki told VOA this is a big leap for Kenya.

"We have walked this journey, I think, for over 20 years when the first draft space policy was done in 1994," Baki said. "We've decided that we are going to walk the talk and build something domestically. It has happened in approximately three years, which to me is no mean feat, and this is quite inspiring to our students because they have something to look up to."

Student James Achesa, who is in his fourth year studying mechanical engineering at Nairobi University, explained his understanding of the Taifa 1 mission.

"It'll help the small-scale farmer, as well as just general people in Kenya to see and understand where our country is going to. So, they might not enjoy the science of putting a spacecraft into space, but the science that does will come and disseminate to them at grassroots levels and will help them plan for their future," Achesa said.

Ivy Kut, who has a bachelor's degree in applied sciences and geoinformatics from the Technical University of Kenya, said, "It's going to benefit Kenyans in that we are going to get our own satellite data with better resolution and that is going to inform a lot of decisions in all sectors, especially in the analysis of earth data."

The next launch attempt is scheduled for Saturday.

See the original post:

Kenya's Third Attempt to Launch First 3U Observation Satellite Delayed - Voice of America - VOA News

TOI-733 b – A Planet In The Small-planet Radius Valley Orbiting A … – Astrobiology News

Figure caption: RV (top panel) and FWHM (bottom panel) time series. The purple markers in each panel represent the HARPS RV and FWHM measurements with inferred offsets extracted. The inferred multi-GP model is shown as a solid black curve, where the dark and light shaded areas show the 1- and 2-sigma credible intervals from said model, and can also explain the data but with a correspondingly lower probability. The solid red line in the top panel shows the star-only model, while the teal sine curve shows the Keplerian for TOI-733 b. In both panels the nominal error bars are in solid purple, and the jitter error bars (HARPS) are semi-transparent purple.

We report the discovery of a hot (Teq ≈ 1055 K) planet in the small-planet radius valley transiting the Sun-like star TOI-733, as part of the KESPRINT follow-up program of TESS planets carried out with the HARPS spectrograph. TESS photometry from sectors 9 and 36 yields an orbital period of Porb = 4.884765 (+1.9e-5 / -2.4e-5) days and a radius of Rp = 1.992 (+0.085 / -0.090) R⊕.

Multi-dimensional Gaussian process modelling of the radial velocity measurements from HARPS and activity indicators gives a semi-amplitude of K = 2.23 ± 0.26 m s⁻¹, translating into a planet mass of Mp = 5.72 (+0.70 / -0.68) M⊕. These parameters imply that the planet is of moderate density (ρp = 3.98 (+0.77 / -0.66) g cm⁻³) and place it in the transition region between rocky and volatile-rich planets with H/He-dominated envelopes on the mass-radius diagram.
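
As a quick sanity check on the reported numbers (not part of the paper's own analysis), the quoted mass and radius do reproduce the quoted bulk density:

```python
import math

M_earth = 5.972e27   # Earth mass in grams
R_earth = 6.371e8    # Earth radius in centimeters

Mp = 5.72  * M_earth          # reported mass of TOI-733 b
Rp = 1.992 * R_earth          # reported radius of TOI-733 b

rho = Mp / ((4.0 / 3.0) * math.pi * Rp**3)
print(f"Bulk density: {rho:.2f} g/cm^3")   # ~3.98 g/cm^3, matching the reported value
```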

Combining these with stellar parameters and abundances, we calculate planet interior and atmosphere models, which in turn suggest that TOI-733 b has a volatile-enriched, most likely secondary outer envelope, and may represent a highly irradiated ocean world, one of only a few such planets around G-type stars that are well-characterised.

Iskra Y. Georgieva, Carina M. Persson, Elisa Goffo, Lorena Acuña, Artyom Aguichine, Luisa M. Serrano, Kristine W. F. Lam, Davide Gandolfi, Karen A. Collins, Steven B. Howell, Fei Dai, Malcolm Fridlund, Judith Korth, Magali Deleuil, Oscar Barragán, William D. Cochran, Szilárd Csizmadia, Hans J. Deeg, Eike Guenther, Artie P. Hatzes, Jon M. Jenkins, John Livingston, Rafael Luque, Olivier Mousis, Hannah L. M. Osborne, Enric Pallé, Seth Redfield, Vincent Van Eylen, Joseph D. Twicken, Joshua N. Winn, Ahlam Alqasim, Kevin I. Collins, Crystal L. Gnilka, David W. Latham, Hannah M. Lewis, Howard M. Relles, George R. Ricker, Pamela Rowden, Sara Seager, Avi Shporer, Thiam-Guan Tan, Andrew Vanderburg, Roland Vanderspek

Comments: Accepted for publication in A&A
Subjects: Earth and Planetary Astrophysics (astro-ph.EP)
Cite as: arXiv:2304.06655 [astro-ph.EP] (or arXiv:2304.06655v1 [astro-ph.EP] for this version)
Submission history: From: Iskra Georgieva [v1] Thu, 13 Apr 2023 16:35:36 UTC (3,171 KB)
https://arxiv.org/abs/2304.06655
Astrobiology

Read the rest here:

TOI-733 b - A Planet In The Small-planet Radius Valley Orbiting A ... - Astrobiology News

NASA's TESS celebrates fifth year scanning th – EurekAlert

Now in its fifth year in space, NASA's TESS (Transiting Exoplanet Survey Satellite) remains a rousing success. TESS's cameras have mapped more than 93% of the entire sky, discovered 329 new worlds and thousands more candidates, and provided new insights into a wide array of cosmic phenomena, from stellar pulsations and exploding stars to supermassive black holes.

Using its four cameras, TESS monitors large swaths of the sky called sectors for about a month at a time. Each sector measures 24 by 96 degrees, about as wide as a person's hand at arm's length and stretching from the horizon to the zenith. The cameras capture a total of 192 million pixels in each full-frame image. During its primary mission, TESS captured one of these images every 30 minutes, but this torrent of data has increased with time. The cameras now record each sector every 200 seconds.

"The volume of high-quality TESS data now available is quite impressive," said Knicole Colón, the mission's project scientist at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "We have more than 251 terabytes just for one of the main data products, called full-frame images. That's the equivalent of streaming 167,000 movies in full HD."
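
A back-of-the-envelope check of that comparison, assuming roughly 1.5 GB for a streamed full-HD movie (an assumed figure, not one given in the release):

```python
total_bytes = 251e12   # 251 terabytes of full-frame images (decimal TB assumed)
movie_bytes = 1.5e9    # assumed size of one full-HD movie stream, ~1.5 GB

print(f"Equivalent HD movies: {total_bytes / movie_bytes:,.0f}")   # ~167,000
```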

"TESS extracts parts of each full-frame image to make cutouts around specific cosmic objects, more than 467,000 of them at the moment, and together they create a detailed record of changing brightness for each one," said Christina Hedges, lead for the TESS General Investigator Office and a research scientist at both the University of Maryland, Baltimore County and Goddard. "We use these files to produce light curves, a product that graphically shows how a source's brightness alters over time."

To find exoplanets, or worlds beyond our solar system, TESS looks for the telltale dimming of a star caused when an orbiting planet passes in front of it. But stars also change brightness for other reasons: exploding as supernovae, erupting in sudden flares, dark star spots on their rotating surfaces, and even slight changes due to oscillations driven by internal sound waves. The rapid, regular observations from TESS enable more detailed study of these phenomena.
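
The dimming TESS looks for is tiny: to first order, the fractional drop in brightness is just the ratio of the planet's and star's projected areas. Here is a minimal sketch using round solar-system values rather than any specific TESS target:

```python
R_sun = 696_000.0   # km
R_earth = 6_371.0   # km
R_jup = 69_911.0    # km

def transit_depth(r_planet_km, r_star_km=R_sun):
    """Fractional dimming when the planet crosses the stellar disk."""
    return (r_planet_km / r_star_km) ** 2

print(f"Earth-size planet, Sun-like star:   {transit_depth(R_earth):.6f}  (~0.008%)")
print(f"Jupiter-size planet, Sun-like star: {transit_depth(R_jup):.4f}  (~1%)")
```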

Some stars give TESS a trifecta of brightness-changing behavior. One example is AU Microscopii, thought to be about 25 million years old, a rowdy youngster less than 1% the age of our Sun. Spotted regions on AU Mic's surface grow and shrink, and the star's rotation carries them into and out of sight. The stormy star also erupts with frequent flares. With all this going on, TESS, with the help of NASA's now-retired Spitzer Space Telescope, discovered a planet about four times Earth's size orbiting the star every 8.5 days. Then, in 2022, scientists announced that TESS data revealed the presence of another, smaller world, one almost three times Earth's size and orbiting every 18.9 days. These discoveries have made the system a touchstone for understanding how stars and planets form and evolve.

Here are a few more of the mission's greatest hits:

New discoveries are waiting to be made within the huge volume of data TESS has already captured. This is a library of observations astronomers will explore for years, but there's much more to come.

"We're celebrating TESS's fifth anniversary at work and wishing it many happy returns!" Colón said.

TESS is a NASA Astrophysics Explorer mission led and operated by MIT in Cambridge, Massachusetts, and managed by NASA's Goddard Space Flight Center. Additional partners include Northrop Grumman, based in Falls Church, Virginia; NASA's Ames Research Center in California's Silicon Valley; the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts; MIT's Lincoln Laboratory; and the Space Telescope Science Institute in Baltimore. More than a dozen universities, research institutes, and observatories worldwide are participants in the mission.


Read more here:

NASA's TESS celebrates fifth year scanning th - EurekAlert

Unraveling new insights on cosmic explosions – Asiana Times

A tremendous pulse of gamma-ray radiation that swept through our solar system on October 9, 2022, overwhelmed the gamma-ray detectors on multiple orbiting satellites and sent astronomers scrambling to investigate the event using the most potent telescopes in the world.

The newly discovered source, designated GRB 221009A after the date it was found, ended up becoming the brightest gamma-ray burst (GRB) ever observed. The gamma-ray burst, which lasted for more than 300 seconds, is thought to be the first sign of the birth of a black hole, which is created when the center of a large, rapidly spinning star collapses under its own weight. Powerful plasma jets are ejected from the growing black hole at nearly light speed, penetrating the collapsing star and emitting gamma rays.

Observations of GRB 221009A from radio waves to gamma rays, including crucial millimeter-wave observations with the Centre for Astrophysics | Harvard & Smithsonian's Submillimeter Array (SMA) in Hawaii, shed new light on the decades-long quest to understand the origin of these extreme cosmic explosions, according to a new study that appears today in the Astrophysical Journal Letters.

What would happen after the initial burst of gamma rays was the mystery of GRB 221009A, the brightest explosion ever seen. The study's principal author is Tanmoy Laskar, an assistant professor of physics and astronomy at the University of Utah. According to him, a dazzling afterglow of light spanning the entire spectrum is produced as the jets collide with the gas surrounding the dying star. "We must be quick and nimble to capture the light before it vanishes and take its secrets, because the afterglow fades pretty quickly," he added.

In an effort to employ the greatest radio and millimetre telescopes in the world to analyse the afterglow of GRB 221009A, astronomers Edo Berger and Yvette Cendes of the Centre for Astrophysics (CfA) immediately obtained data with the SMA.

Garrett Keating, an SMA project scientist and CfA researcher, states that they were able to swiftly turn the SMA to the site of GRB 221009A thanks to its capacity to respond quickly. The team was impressed by the brightness of the GRB's afterglow, which they could observe for more than 10 days before it faded.

Astronomers were perplexed when they combined and analyzed data from the SMA and other telescopes around the world and discovered that the millimeter and radio wave measurements were significantly brighter than the visible and X-ray radiation would suggest.

According to Cendes, a CfA research associate, one explanation is that the potent jet created by GRB 221009A is more complicated than in other GRBs. It's likely that one part of the jet produces visible light and X-rays, while another part generates radio waves and early millimeter waves.

According to researchers, this afterglow is so intense that they will keep looking into its radio emission for months, if not years. With this much longer time span, they hope to solve the riddle of the early excess emission's mysterious origin.

Beyond the specifics of this GRB, astronomers now have a crucial new capability: reacting quickly to GRBs and other comparable phenomena with millimeter-wave telescopes.

According to Edo Berger, professor of astronomy at Harvard University and the CfA, the key lesson from this GRB is that without fast-acting radio and millimeter telescopes, such as the SMA, we would not be able to learn more about the most intense explosions in the cosmos. If we want to benefit from these gifts from the cosmos, we have to be as responsive as we can, because we never know when such events will occur.

Read more on : https://www.nasa.gov/feature/goddard/2023/nasa-missions-study-what-may-be-a-1-in-10000-year-gamma-ray-burst/

See the original post:

Unraveling new insights on cosmic explosions - Asiana Times

Ved Chirayath is on a mission to map the world's oceans – University of Miami: News@theU

The University of Miami professor, National Geographic Explorer, inventor, and fashion photographer has created and developed next-generation remote sensing instruments capable of mapping the seafloor in remarkable detail.

One misstep and Ved Chirayath would have been a goner. Cut off from civilization, his cell phone useless, he knew that medical aid would never reach him in time if he were bitten by one of the countless sea snakes that surrounded him.

"They're curious creatures," the University of Miami researcher and National Geographic Explorer said of the highly venomous snakes. "They'll swim right up to you and lick you. And when they sleep, they sleep head down in the rocks. So, my real concern was not to step on one."

But despite the very real prospect of death, Chirayath concentrated on the task at hand: mapping a colony of stromatolites in Australia's snake-infested Shark Bay.

He would spend the entire two months of that 2012 field campaign navigating around the deadly snakes, the thought of dying only occasionally entering his mind. His unquenchable thirst for knowledge allowed him to stay focused.

It's that same thirst that drives him today in his quest to explore Earth's last unexplored frontier: its oceans.

"We have mapped more of Mars and our Moon than we have of our planet's seafloor, and we know more about the large-scale structure of our universe and its history than we do about the various systems in our oceans," said Chirayath, the G. Unger Vetlesen Professor of Earth Sciences at the Rosenstiel School of Marine, Atmospheric, and Earth Science. "And we know so much more about our universe because we can see very far into space and in different wavelengths."

Peering into the deep ocean, however, is another matter. Light penetrates only so far below the sea surface, and ocean waves greatly distort the appearance of undersea objects.

But using a camera he invented that literally sees through ocean waves, Chirayath is removing those distortions and helping to reveal the trove of deep secrets hidden by our oceans. Mounted on a drone flying above the water, FluidCam uses a technology called Fluid Lensing to photograph and map the ocean in remarkable clarity. From American Samoa and Guam to Hawaii and Puerto Rico, he has used the device to map more than a dozen shallow marine ecosystems such as coral reefs at depths as low as 63 feet.

That still pales in comparison to the average depth of the ocean, which is nearly 4,000 meters. "And 99 percent of the habitable volume of our planet is in that region," said Chirayath, who also directs the Rosenstiel School's Aircraft Center for Earth Studies (ACES).

So, he created the more powerful MiDAR. The Multispectral Imaging, Detection, and Active Reflectance device combines FluidCam with high-intensity LED and laser light pulses to map and transmit 3D images of the sea floor at greater detail and depths. Chirayath's research will be on display April 20-21 at the University's showcase exhibit during the eMerge Americas conference at the Miami Beach Convention Center.

Recently, he used MiDAR to conduct multispectral mapping of corals in Guam, validating the airborne images during subsequent dives.

Still, even MiDAR will not illuminate objects 4,000 meters deep. But install the device on a robot sub that can dive thousands of meters deep, and the possibilities of imaging the seafloor in the same detail and volume that satellites have mapped land are limitless, according to Chirayath.

"It keeps me up at night," he said of MiDAR's potential. He envisions his creation, awarded NASA's invention of the year in 2019, exploring not only the Earth's deep oceans but worlds beyond, from sampling minerals on Mars to looking for signs of life beneath icy ocean moons like Jupiter's Europa.

Chirayath's fascination with studying and surveying the ocean deep was born out of his love of the stars.

He grew up in Los Angeles, looking up at the stars and contemplating the possibility of life on other planets. As a youngster, he would attend open house events at NASA's Jet Propulsion Laboratory in nearby Pasadena, learning from the scientists and engineers who were building the Cassini space probe that explored Saturn and its intricate rings.

"I knew at 5 years old that I wanted to work for NASA and make a contribution to discovering other worlds," Chirayath said.

By the time he was a teenager, astronomy had been his passion for more than half his life. It was also an escape, a methodology, he said, to deal with some of the challenges he faced at that time. "I was homeless for about three years, and I used that time to sit on top of a mountain and do as much astronomy as I could," Chirayath noted.

At 16, he detected an exoplanet one and a half times the size of Jupiter and 150 light years from Earth in the constellation Pegasus, doing so with a consumer digital camera he modified and attached to a telescope. His refashioned scope allowed him to employ the transit photometry method for detecting exoplanets. Whenever a planet passes directly between a star and its observer, it dims the star's light ever so slightly. Chirayath's modified telescope detected just such a dip in light.
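
The idea behind transit photometry can be illustrated with a short, self-contained sketch: inject a small periodic dip into a noisy synthetic brightness series, fold the data at the trial period, and compare in-transit to out-of-transit flux. All numbers below are made up for illustration; this is not Chirayath's actual pipeline, and a real search would also scan over many trial periods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic brightness measurements of a star (relative flux), with a ~1% dip
# every 3.5 days lasting 0.15 days. All values are invented for this sketch.
period, depth, duration = 3.5, 0.01, 0.15
t = np.arange(0, 60, 0.02)                       # 60 days of observations
flux = 1 + 0.002 * rng.standard_normal(t.size)   # photometric noise
in_transit = (t % period) < duration
flux[in_transit] -= depth

# Phase-fold at the trial period and compare in-transit vs. out-of-transit flux.
phase = t % period
dip = flux[phase < duration].mean()
baseline = flux[phase >= duration].mean()
print(f"Measured dip: {(baseline - dip) * 100:.2f}%  (injected: {depth * 100:.1f}%)")
```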

Earth- and space-based observatories that look continuously at stars for weeks and even months at a time use the technique. It took Chirayath three years to locate the planet, but his patience paid off in the form of a scholarship he won and used to help study theoretical physics at Moscow State University in Russia. He later transferred to Stanford University, where he earned his undergraduate degree.

To help pay the bills while he attended college, he worked as a fashion photographer for Vogue. His pictures have also appeared in Elle, The New York Times, and Vanity Fair.

He earned his Ph.D. in aeronautics and astronautics from Stanford University, reconnecting with his passion for astronomy and always asking himself, "What can I do with small telescopes? How can I make an impact? How can I develop new technologies and explore our solar system?"

He came to the University of Miami in 2021 after a decade-long career at NASA's Ames Research Center, where he founded and led its Laboratory for Advanced Sensing, inventing the suite of next-generation remote sensing technologies that are now the cornerstones of his work at ACES.

While at NASA, he also created NeMO-Net, a single-player video game in which players help NASA classify coral reefs. The space agency awarded Chirayath its 2016 Equal Employment Opportunity Medal for organizing its first participation in the San Francisco LGBT Pride Parade.

His fluid lensing mapping of the ocean promises to improve the resilience of coastal areas impacted by severe storms as well as assess the effects of climate change on coastal areas around the world.

While his origins are in astronomy, today he is more of a marine scientist than an astrophysicist. Still, the two fields are incredibly similar, Chirayath pointed out. "They're both very difficult to study and require thinking beyond our terrestrial comfort zone. I love them both, and they can easily coexist. You can have large space observatories, and they can even help one another. A lot of the technologies that I've created were inspired by things I learned in astrophysics and applied astronomy. But there's not that curiosity for understanding our own planet in a way that there is for space, and I'm hoping to change that."

He applauds the $14 billion James Webb Space Telescope, which has been capturing the deepest infrared images of our universe ever taken.

"But we've never invested $14 billion into an ocean observatory, into something that looks critically at a piece of the puzzle that, if we miss, we do so at our own peril," Chirayath explained. "I'm one of the many technologists who are looking inward and saying, 'This is what we understand about the universe and its large-scale structure, but a lot of the questions that are being posed to understand our universe and what's in it can also be posed for the ocean.' If we don't map it, if we don't understand it, if we're not able to characterize it, then when it fails or changes, humans may not be a part of the future."

The University of Miami is a Titanium Sponsor of eMerge Americas. Visit the University's research and technology showcase April 20-21 at the Miami Beach Convention Center. Registration for an Unlimited TECH Pass is free for all University of Miami students and faculty and staff members.

Read the original:

Ved Chirayath is on a mission to map the world's oceans - University of Miami: News@theU

Multinucleon transfer creates short-lived uranium isotope – Interesting Engineering

According to a report, scientists have discovered and produced a new type of uranium isotope, known as uranium-241.

This is the first time a new neutron-rich isotope of uranium has been discovered since 1979, and it was identified by researchers at the High-energy Accelerator Research Organization located in Japan. Uranium-241 is a highly radioactive isotope with 92 protons and 149 neutrons, and it is predicted to have a brief half-life of around 40 minutes.

Uranium, a radioactive element, is a member of the actinide series, which includes all elements with atomic numbers between 89 and 103. Uranium-241 is known as a neutron-rich isotope because it has more neutrons than is typical for uranium isotopes. This discovery has significant implications for the study of nuclear physics and astrophysics, as well as our understanding of heavy elements' behavior and stability.

The researchers utilized a technique called multinucleon transfer to create uranium-241 by firing uranium-238 at platinum-198 nuclei using Japan's RIKEN accelerator. The resulting nuclei were observed to determine their mass as they traveled a certain distance through a medium. This process led to the creation of 18 new isotopes with between 143 and 150 neutrons.

The discovery of uranium-241 illustrates the capabilities of modern particle accelerators and experimental methods in advancing scientific knowledge and exploration. The collision of atomic nuclei at high speeds and energies enables the creation and study of short-lived and exotic isotopes that were previously unobservable and unobtainable.

The rest is here:

Multinucleon transfer creates short-lived uranium isotope - Interesting Engineering

Midjourney Flips the Formula with New Image-to-Text Generator – PetaPixel

Midjourney has announced a new /describe command that allows users to leverage the powerful artificial intelligence (AI) platform to transform images into words, upending Midjourney's typical procedure of converting text to images.

Paul DelSignore describes the feature on Medium, writing that describe has numerous significant benefits for a wide range of use cases.

Today we're releasing a /describe command that lets you transform images-into-words. Give it a shot! We think this tool will transform your linguistic-visual process both in terms of creative power and discovery.

Midjourney (@midjourney) April 4, 2023

One of the best aspects of the describe feature is that it should improve accessibility. For people with visual impairments, navigating the web can be challenging. It's made more accessible by Alt text elements that describe images. Creating these Alt elements manually is time-consuming, and Midjourney's describe functionality may overcome this hurdle.

Improved search functionality is beneficial to nearly every internet user. Search engines can index images more effectively when they include better and more plentiful descriptions.

DelSignore also highlights the importance of captions, as detailed captions help explain images and provide more clarity to viewers.

Image-to-text generation creates an interesting feedback loop with Midjourney's text-to-image system. While Midjourney users can already generate similar images based on a selection, image-to-text tools may make it easier to develop alternate and potentially more fruitful descriptions for the text-to-image generator.

Gonna remix one of my images I created with Element 3D on AE

Using the /describe function to see what it says on #midjourney v5 is really interesting for prompt generation so will now see what they make. pic.twitter.com/BvkL3pu3SI

GooRee (@GooRee) April 3, 2023

In its current iteration, like with its text-to-image generator, Midjourney will create four different text descriptions of an uploaded image. It's also possible to generate new variations based on a selected description. To upload a photo, users write /describe into the text field, and a drag-and-drop upload field appears.

Users can then select one of the generated descriptions and remix the uploaded image using the new text prompt. The user can also edit the text prompt, adding a new element of control to the creative process.

PetaPixel tested the feature, first using a portrait captured by editor-in-chief Jaron Schneider.

Midjourney's four generated descriptions are of varying quality.

The first two descriptions are pretty good, especially the second one. It's interesting that Midjourney described a specific Voigtlander 15mm prime lens, though, for the record, the image was shot with a Tamron 35mm f/1.8 prime. Using the second description to generate a remix leads to pretty impressive results.

Using another image by Schneider, this time a landscape image from Mono Lake in California, Midjourney again generates mostly useful text descriptions, albeit with the wrong location information about Mono Lake.

Using the third description as a remix prompt, Midjourney delivered four very realistic new images.

Midjourney's /describe tool is intriguing, even in its early state. The tool should help creators make more detailed Alt text, captions, and even different AI-generated artwork. While some parts of the descriptions are puzzling, to say the least, they show promise.

Image credits: Jaron Schneider and Midjourney

More:

Midjourney Flips the Formula with New Image-to-Text Generator - PetaPixel

Which is easier to use: Bing Image Creator or Midjourney? – Windows Central

With the creation of AI image creators, anyone can have their ideas turned into art within seconds. It almost feels like magic. However, some of these AI programs are far easier to use than others; partially due to how the programs work and partially due to whether or not free versions are available for people to experiment with.

Midjourney and Bing Image Creator are some of the best AI image generators out there right now. They both have their pros and cons, but one is clearly easier to get started with right from the get-go. Meanwhile, the other gives you more creative control.

WINNER: Without a doubt, Bing Image Creator is far more approachable than Midjourney. It doesn't require using any apps and the art-generating interface is far more beginner-friendly.

Bing Image Creator is accessed from a web browser, so you don't need to download anything. Simply log in to a Microsoft account, type your text prompt into the command box, and click Generate to get four images based on your description.

The software even remembers previous art prompts that you gave it and displays these images in a box titled Recent on the right side of the page for you to revisit.

Midjourney, meanwhile, is hosted on a Discord server, so those who aren't as familiar with this social communication platform might find it confusing. You must have a Discord account and a Midjourney subscription to generate art on this platform.

What's more, Midjourney's Discord has a less beginner-friendly interface. You must enter the slash command "/imagine" (without quotes) into a command box within either a public channel chatroom or within a private DM before the ability to enter a text prompt is even available. However, after a prompt is submitted, users typically only have to wait a few seconds to a few minutes before getting four images based on their prompt. This process might be very confusing for some, but it is easy enough to get the hang of once you know what to do.

WINNER: Since Bing Image Creator doesn't offer any editing or variation options whatsoever, the default winner is Midjourney. While it doesn't necessarily have editing options, Midjourney does give users the ability to influence and alter generated images more than Bing Image Creator does.

It's also worth noting that Midjourney tends to do better at generating accurate anatomy than Bing Image Creator does. In that way, it requires less editing and fewer variations to get what you want.

Midjourney: There aren't any inpainting or outpainting options for adding, editing, or expanding details on a Midjourney AI image. However, Midjourney does make it easier than Bing Image Creator to request specific details or to get variants on an original prompt in the hopes of getting an image that's more in line with what is envisioned.

For instance, I asked Midjourney for "a portrait of a woman eating berries in a gothic art style while surrounded by bats and roses." The program had a really hard time making berries and instead made it look like the girl was eating roses or bats. Eww.

To try and get a less Ozzy Osbourne-esque image, I clicked Midjourney's V buttons (variation) that appear under the images to generate variations of the original images. The V and U buttons are numbered to correspond with the four images generated. Upper left is one, upper right is two, bottom left is three, and bottom right is four. Where V stands for variation, U stands for upscale and creates a larger, more detailed version of the image indicated when clicked. Alternately, clicking the Refresh button makes Midjourney generate four brand-new images based on the original prompt.

Midjourney users can also set the exact aspect ratio, resolution, and abstraction level of generated images by including these particular instructions in their prompts. However, Bing Image Creator cannot take these same commands and will always produce a 1:1 image.
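
To give a rough idea of what those instructions look like (the flag names below reflect Midjourney's commonly documented prompt parameters, but the exact set and allowed values depend on the model version, and the prompt itself is just an example):

/imagine prompt: a portrait of a woman eating berries in a gothic art style --ar 2:3 --stylize 750

Here --ar sets the aspect ratio and --stylize controls how strongly Midjourney applies its own aesthetic; other flags such as --chaos and --quality adjust how varied and how detailed the results are. Bing Image Creator has no equivalent controls and, as noted above, always returns square images.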

Additionally, users can upload images and tell Midjourney to use those images as a reference. For instance, I could upload a portrait of myself and then ask Midjourney to create images of a pirate that are inspired by my facial features. This opens up a lot more opportunities than Bing Image Creator, which doesn't give users the ability to upload images.

Bing Image Creator: Midjourney's customization abilities are severely limited, but at least there are some. The only thing Bing Image Creator users can do with a generated image is download it or share it. This can be frustrating when you have an image that is nearly perfect but could use some tweaks.

For instance, Bing Image Creator generated an image of a girl holding a frog, but her left eye and hands came out a little oddly shaped. It would be nice if there were tools to fix these things or at least a variation button to get slightly different pictures of the same image that might fix these issues. However, the only course of action is to generate a new image and potentially waste a boost credit in the process.

WINNER: Since Midjourney no longer offers a free trial, the clear winner between the two, in this case, is Bing Image Creator. Microsoft's system is specifically designed around people using it for free and it's easy to get boost credits without spending money.

Bing Image Creator: This is a completely free AI image generator; all you need to get started is a free Microsoft account. But, as with many AI image generators, Bing Image Creator uses a credit system. Instead of determining whether you can use the program at all, these "boost" credits simply guarantee that your image will be generated quickly. Once you run out of boosts, it can take several minutes for Bing Image Creator to generate an image, but it will still do so.

Users start with 25 boosts which replenish weekly. What's more, users can earn five additional boosts each time they spend 500 Microsoft Reward Points, which are acquired for free by engaging with Microsoft's ecosystem. For instance, you might earn Microsoft Reward Points by reading Microsoft articles, using Microsoft Edge, taking Microsoft quizzes, and more. So you can use Bing Image Creator indefinitely as long as you have some patience.

Midjourney: It used to be that Midjourney offered an impressive free trial, but that changed recently. Now, the only way to gain access to Midjourney is by paying for one of the three subscription plans.

There's the Basic Plan for $8 per month, the Standard Plan for $24 per month, or the Pro Plan for $48 per month; those are the discounted monthly rates for annual billing, and paying month to month costs a bit more. Obviously, this costs more than the free Bing Image Creator, but it's a good price compared to some other AI image generators out there.

From a beginner's standpoint, Bing Image Creator is far easier to use. It does have some limitations, as you cannot ask it to create variations on an image it previously generated. However, it is free and requires only a Microsoft account to access. What's more, it's accessed from a web browser and produces images quickly as long as you have boost credits, which replenish every week.

Midjourney, on the other hand, offers more control and can make alternate versions of an image it previously generated. This might help the program produce a better image variant if the original generation had any issues. However, Midjourney no longer offers a free trial and since it functions using slash commands within Discord, it might be confusing for some people.

See original here:

Which is easier to use: Bing Image Creator or Midjourney? - Windows Central

How to use Midjourney to generate amazing images and art – ZDNet

screenshot by Lance Whitney/ZDNET

Looking for a logo for your business, artwork for a project, or an image for a report? One way to get a helping hand is to turn to an online AI tool. You can choose from an array of sites. But one service that offers truly impressive results is Midjourney. With this AI image creator, you describe the type of image you want by entering text. In return, the site delivers four high-quality renderings.

Initially, Midjourney offered a free trial through which you could test the service by requesting a limited number of images. Unfortunately, the site ended the free trial for now, with the CEO blaming the move on a surge of new users. That leaves you with no option but to sign up for one of the paid subscription plans.

Also: How to use Bing Image Creator (and why it's better than DALL-E 2)

A basic plan will run you $10 a month or $96 a year, a standard plan is $30 a month or $288 a year, and a pro plan is $60 a month or $576 a year. Each tier ups the speed of the responses and offers other benefits. To get a taste of Midjourney, you may want to start with the basic plan to see how well it works for you.

Getting started with Midjourney can be confusing as you have to jump through a couple of hoops. To kick things off, go to the Midjourney website and click the link at the bottom for Join the beta.

You're then taken to the website for Discord, which provides the server on which the responses are generated. If necessary, click the button for Continue to Discord. At the sign in window, click the Register link. Enter your email address, type a username and password, and select your date of birth. Click Continue.

You can then log in with your account. Close any initial windows or messages that pop up. Instead, click the button on the left side bar for Explore public servers. Among the featured communities, look for Midjourney or type Midjourney in the search field to find it. Select Midjourney to access it. Then click the button at the top for Join Midjourney.

Access the Midjourney community

After you've joined, the left sidebar will display newbie groups under Newcomer rooms. Click one of the rooms to access it. Scroll up and down the page to see the images that the Midjourney AI bot has created for other users.

View the messages in a newbie group.

With the free trial no longer available, you'll have to subscribe to a paid plan before you can try out the Midjourney service. In the Message field at the bottom of the screen, type /subscribe and press Enter. Click the button for Open subscription page.

Click the button to go to the subscription page.

At the subscription page, choose either yearly billing or monthly billing. Click the Subscribe button for the plan you want and then fill out the payment form. After the payment goes through, return to the Discord page and the newbie group you had accessed.

Choose the subscription plan you want.

You can now finally describe the image you want created. In the Message field at the bottom, type "/imagine" or just type "/" and then choose imagine from the menu. A prompt field then appears.

In that field, type the description of the image you need generated. Press enter. Wait at least a few seconds for the images to be fully rendered. By default, Midjourney creates four images for each request, with each one appearing in a small thumbnail.
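
To make the flow concrete (the prompt here is only an example, and the bot's exact reply format can change between versions), the exchange in the Message field looks roughly like this:

/imagine prompt: a lighthouse on a rocky coast at dusk, painted in watercolor

After a short wait, the Midjourney Bot replies in the channel with a 2x2 grid of four thumbnails and a row of buttons underneath, which are covered next.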

Describe the image you want, and the AI renders four different ones.

Under the images are buttons -- U1, U2, U3, and U4 along with V1, V2, V3, and V4. The U buttons are for upscaling the image. The numbers correspond to the four different images by row. The first image is 1, the image to its right is 2, the first image on the next row is 3, and the image to its right is 4. Click the U button for the image you wish to upscale to see the effect. Scroll down the screen to see the upscaled image.

Ask the AI to upscale one of the images.

The V buttons are used to make changes to a specific image. Maybe there's a particular image you like among the four but want to see how it can be enhanced or improved. Click the V button for that image. Scroll down the screen until you see another series of four images, each one displaying a slightly different version of the image you selected.

Ask the AI to revise one of the images.

You can also play with an image that's been upscaled. Under the image, click the Make variations button to generate revisions to the image. Click the Light Upscale Redo to upscale the image slightly using the current version of Midjourney. Click the Beta Upscale Redo to upscale the image even higher using the latest beta version. Click the Web button to display the image at a larger size in a separate window.

Use different commands to further revise the image.

With the larger image, click the magnifying glass cursor to zoom in on the image. Right-click on the image, and you can use your browser's controls to save it, copy it, or email it.

View the image in a separate window where you can save it.

There are a variety of commands you can run at the bottom field to view and manage your interactions. Click in the field and type "/". Scroll down the list to see all the available commands.

View the different commands.

Finally, should you decide not to continue with Midjourney, you can cancel your subscription. Sign into your Midjourney account page. Click the Manage link next to Plan details for your plan and then select Cancel Plan. Confirm the cancellation.

Cancel your subscription.

See original here:

How to use Midjourney to generate amazing images and art - ZDNet

What is AI image generator Midjourney? The deepfake technology delighting and tricking the internet – Fox Business

News agencies warned consumers Tuesday to be wary of deepfake mugshots of former President Trump flooding the internet in advance of his arrival in court. But what do we know about the platform that's being used to create these realistic simulations?

Midjourney is an AI image program which creates realistic images based on text commands given by users. The company also released a "describe" feature this week which lets users transform images into words.

The company was started in 2022 and operates out of San Francisco, California, with just eleven staff members, according to its website.

CEO David Holz recently told The Verge they were stopping free trials after getting an influx of users who made throwaway accounts to access them. The lowest cost plan is now $10 a month.

AI image generator Midjourney logo.

The Midjourney V5 model is the newest and most advanced model released on March 15th, the company says.

"This model has very high Coherency, excels at interpreting natural language prompts, is higher resolution, and supports advanced features like repeating patterns," the website describes.

Many dream-like artistic images created by users are shown in the user gallery. Social media users have had fun creating everything from "historical selfies" to World Cup photos shot by famous film directors.

But the capability to create simulated images of real people on the platform has caused some controversy and raised concerns about the technology being used for nefarious purposes.

Mediaite shared one AI-generated image of Trump's mugshot that shows the former president staring offscreen in front of a grainy background. (Mediaite / @TheInfiniteDude, Twitter)

AI IMAGE GENERATOR MIDJOURNEY BANS DEEPFAKES OF CHINA'S XI JINPING TO MINIMIZE DRAMA

Deepfake technology, which projects a person's face and voice onto another image or video, has attracted numerous headlines in recent months.

While many deepfakes are clearly parodies created for laughs on social media, others aren't as obvious.

Photos of Pope Francis wearing a white puffer jacket from the embattled fashion brand Balenciaga fooled millions of viewers last week. The image was created through the Midjourney app, according to Twitter warnings.

Fake images of former President Trump resisting arrest also went viral on social media last month.

TIKTOK BANS DEEPFAKES OF YOUNG PEOPLE IN UPDATED GUIDELINES

Midjourney requires users to keep the platform "PG-13" friendly. Guidelines ban adult content, gore, and "visually shocking or disturbing content." Images or text prompts that are "inherently disrespectful, aggressive, or abusive" are also not tolerated.

However, the company has faced scrutiny over its policy of disallowing simulated images of Chinese President Xi Jinping while allowing users to create fake images of other world leaders.

Fox News' Kendall Tietz and Andrea Vacchiano contributed to this report.

Read more:

What is AI image generator Midjourney? The deepfake technology delighting and tricking the internet - Fox Business

Artists astound with AI-generated film stills from a parallel universe – Ars Technica

An AI-generated image from an #aicinema still series called "Vinyl Vengeance" by Julie Wieland, created using Midjourney.

Since last year, a group of artists have been using an AI image generator called Midjourney to create still photos of films that don't exist. They call the trend "AI cinema." We spoke to one of its practitioners, Julie Wieland, and asked her about her technique, which she calls "synthography," for synthetic photography.

Last year, image synthesis models like DALL-E 2, Stable Diffusion, and Midjourney began allowing anyone with a text description (called a "prompt") to generate a still image in many different styles. The technique has been controversial among some artists, but other artists have embraced the new tools and run with them.

While anyone with a prompt can make an AI-generated image, it soon became clear that some people possessed a special talent for finessing these new AI tools to produce better content. As with painting or photography, the human creative spark is still necessary to produce notable results consistently.

Not long after the wonder of generating solo images emerged, some artists began creating multiple AI-generated images with the same theme, and they did it using a wide, film-like aspect ratio. They strung them together to tell a story and posted them on Twitter with the hashtag #aicinema. Due to technological limitations, the images didn't move (yet), but the group of pictures gave the aesthetic impression that they all came from the same film.

The fun part is that these films don't exist.

The first tweet we could find that included the #aicinema tag and the familiar four film-style images with a related theme came from Jon Finger on September 28, 2022. Wieland, a graphic designer by day who has been practicing AI cinema for several months now, acknowledges Finger's pioneering role in the art form, along with another artist. "I probably saw it first from John Meta and Jon Finger," she says.

It's worth noting that the AI cinema movement in its current still-image form may be short-lived once text2video models such as Runway's Gen-2 become more capable and widespread. But for now, we'll attempt to capture the zeitgeist from this brief moment in AI time.

To get more of an inside look at the #aicinema movement, we spoke to Wieland, who's based in Germany and has racked up a sizable following on Twitter by posting eye-catching works of art generated by Midjourney. We've previously featured her work in an article about Midjourney v5, a recent upgrade to the model that added more realism.

AI art has been a fruitful field for Wieland, who feels that Midjourney not only gives her a creative outlet but speeds up her professional workflow. This interview was conducted via Twitter direct messages, and her answers have been edited for clarity and length.

An image from an AI cinema still image series called "la dolce vita" by Julie Wieland, generated with Midjourney v5 and refined with Photoshop.

An image from an AI cinema still image series called "la dolce vita" by Julie Wieland, generated with Midjourney v5 and refined with Photoshop.

An image from an AI cinema still image series called "la dolce vita" by Julie Wieland, generated with Midjourney v5 and refined with Photoshop.

An image from an AI cinema still image series called "la dolce vita" by Julie Wieland, generated with Midjourney v5 and refined with Photoshop.

Ars: What inspired you to create AI-generated film stills?

Wieland: It started out with dabbling in DALL-E when I finally got my access from being on the waitlist for a few weeks. To be honest, I don't like the "painted astronaut dog in space" aesthetic too much that was very popular in the summer of 2022, so I wanted to test what else is out there in the AI universe. I thought that photography and movie stills would be really hard to nail, but I found ways to get good results, and I used them pretty quickly in my day-to-day job as a graphic designer for mood boards and pitches.

With Midjourney, I reduced my time looking for inspiration on Pinterest and stock sites from two days of work to maybe 24 hours, because I can generate the exact feeling I need to get across, so clients know how it will "feel." Onboarding illustrators, photographers, and videographers has never been easier ever since.

A photo of graphic designer Julie Wieland.

Julie Wieland

Ars: You often call yourself a "synthographer" and your artform "synthography." Can you explain why?

Wieland: In my current exploration of AI-based works I find "synthographer" to be the most logical term to apply to me personally. While photographers are able to capture real moments in time, synthographers are able to capture moments that never have and never will happen.

When asked, I usually refer to Stephan Ango's words on synthography: "This new kind of camera replicates what your imagination does. It receives words and then synthesizes a picture from its experience seeing millions of other pictures. The output doesn't have a name yet, but I'll call it a synthograph (meaning synthetic drawing)."

Ars: What process do you use to create your AI cinema images?

Wieland: My process right now looks like this. I use Midjourney for the "original" or "raw" images, then do outpainting (and small inpainting) in DALL-E 2. Finally, I do editing and color correction in Adobe Photoshop or Adobe Lightroom.

An image from an AI cinema still image series by Julie Wieland, generated with Midjourney v5 and refined with Photoshop.

Ars: Do you often encounter any particular challenges with the tools or with prompting?

Wieland: I've never run into a challenge I couldn't solve. Being pretty fluent in photo editing, I always find a way to get the images to look like I needed or wanted them. For me, it's become a tool just like Photoshop to speed up my process and helps create my visions. I skip the image search on stock sites, and I've replaced that process with prompts. The results are usually better, more accurate, and unique.

Ars: What has the reaction been like to your art?

Wieland: Quite mixed, I would say. My Twitter grew only because of posting AI content. On Instagram and Tiktok, I haven't found "my crowd" just yet, and the content feels more like it's getting ignored or brushed over. Maybe because my following is more established on graphic design and tutorials rather than photography or AI tools.

In the first months, I had a hard time seeing my content as "art"; coming from a designer's perspective, I approach my work in a really calculated way. But in 2023, I embraced the process of creating a bit more freely, and I'm also exploring different fields in the industry other than just my day-to-day job in graphic design.

The community surrounding AI photography, AI cinema, and synthography has grown quite a bit over the past few weeks and months, and I appreciate the positive feedback on Twitter a lot. I also appreciate seeing others getting inspired by my postsand vice versa, of course.

An image from an AI cinema still image series called "when we all fall asleep, where do we go?" by Julie Wieland, generated with Midjourney v4 and refined with Photoshop.

An image from an AI cinema still image series called "when we all fall asleep, where do we go?" by Julie Wieland, generated with Midjourney v4 and refined with Photoshop.

An image from an AI cinema still image series called "when we all fall asleep, where do we go?" by Julie Wieland, generated with Midjourney v4 and refined with Photoshop.

An image from an AI cinema still image series called "when we all fall asleep, where do we go?" by Julie Wieland, generated with Midjourney v4 and refined with Photoshop.

Ars: What would you say to someone who might say you are not the artist behind your works because Midjourney creates them for you?

Wieland: The people that say Midjourney is just "writing 3 words and pushing a button" are the same ones that stand in front of a Rothko painting or Duchamp Readymades and go, "Well, I could've done this too." It's about the story you're telling, not the tool you're using.

Link:

Artists astound with AI-generated film stills from a parallel universe - Ars Technica

This story about AI was written by a human: How Nelson tech experts are using new artificial intelligence tools – Nelson … – Nelson Star

Want to see how ChatGPT wrote this story? Click here.

It only took artificial intelligence seconds to replace Zan Comerford.

Comerford, the founder of Litework Marketing in Nelson, had been writing news releases throughout the day when her husband insisted on showing what ChatGPT could do. He demonstrated by asking it to write a release similar to what she had been working on.

It generated a document that was a reasonable facsimile of her work, and did in less than a minute what she'd just spent hours on.

"I was sunk. I was like, there goes my job, there goes the future of humanity."

Since the public version of ChatGPT launched in November, it has proven to be a generational moment in the history of the technology. It's as important as the introduction of the iPhone, or the emergence of social media.

AI is omnipresent in today's world. It's working behind the scenes every day on phones, in Google searches, even on Netflix.

But ChatGPT is different: It's a natural language generator that pulls information from the internet to form what is the most statistically likely response to user questions or commands, known as prompts. And it does so in a human voice that can be conversational and even a little disarming.

Its responses are seemingly limited only by one's imagination. Ask it to write a short story about aliens who want ice cream and it will start typing in front of your eyes. Ask it to provide a vegetarian recipe for dinner, or a weekly exercise plan, or for the meaning of life, and it's there in moments.

The emergence of what's known as Generative AI has come as a surprise to people in Nelson's tech community.

Shortly after ChatGPT's release, Brad Pommen of Nelson's SMRT1 Technologies was driving with some of his employees to meetings in Vancouver. The company, which specializes in touchscreen-based vending machines, had previously tested chatbots and decided they weren't very good.

But on the trip, designer Greg Coppen began playing with ChatGPT and the group soon realized its potential.

"We're all just spitballing and entering this text and getting instant results," says Pommen. "It doesn't replace anybody at this point, but it's not far off from needing less resources and doing more with less - that really caught my attention."

It's also far from a finished product. The latest version, GPT-4, which was released by the American company OpenAI in mid-March, may be able to pass a bar exam, but it still makes factual errors (ask it to write your online bio and you'll probably be surprised by the response). Even though it can write in Shakespearean English, its writing probably wouldn't have impressed The Bard.

That's provided some peace for Comerford.

She began experimenting with ChatGPT and found it worked best as an idea generator. It couldn't provide inspired marketing campaigns for Comerford's clients in the tourism and cannabis industries, but it could be used to finesse her own thoughts and help Comerford overcome occasional writer's block.

It was a tool, she realized, and not one that would soon take her job.

"If a machine gives you the bones, then you can build from there. I haven't experienced anything with AI or ChatGPT yet that I would publish without tweaking, so that makes me feel a little bit more relieved."

Pommen has come to the same conclusion.

SMRT1 has begun using ChatGPT to write grant proposals, summarize points and even build pro and con lists. What will it be able to do in a month or a year? Pommen is intrigued to find out.

"I'm always looking for the positive side of things. I am never focused on the negative. And I see this as just another opportunity of creativity exemplified."

Practitioners, enthusiasts, advocates and skeptics gathered for presentations on artificial intelligence at the Nelson Innovation Centre on March 29. How to use AI tools, and why they might raise ethical and practical concerns, were among the topics. Photo: Tyler Harper

The classroom is real. The teacher isn't

Keeping students engaged can be a chore for Hazel Mousley.

Mousley is an online French tutor with students ranging in age from four to 80. The younger they are, the harder it can be for Mousley to connect with them.

But they seemingly respond to AI.

One of Mousley's students is a 10-year-old girl who loves figure skating and is, unsurprisingly, not as invested in her French homework. Mousley's solution was to ask ChatGPT to write a short play in French about a girl and her stuffed elephant at a skating competition. It was a hit with the student.

"The play is very short, and I'm just astounded at how simple the vocabulary is and how hilarious it is."

Every tutor Mousley knows is using ChatGPT. Not only can it interpret a student's poor grammar and spelling in prompts, it also responds with empathy. Her students treat it like a friend, and Mousley sometimes feels like she is only a witness to the lesson.

The loss of direct influence can be worth it. Mousley says the right prompts provide exercises geared at any level of language.

"If I have a student learning a very specific grammar piece, it might be hard to find exercises. It would take me a long time to create an exercise on that. I can just say [to ChatGPT], 'Write whatever using this grammar concept as much as possible.' And then, holy crow, it creates some very compelling pieces doing that."

But if ChatGPT can ask questions, it can also give students passable answers that have prompted plagiarism concerns among educators.

To illustrate this, Dr. Theresa Southam, Selkirk College's co-ordinator of the Teaching and Learning Centre, suggests a prompt: Ask ChatGPT to write 1,000 words on the British North America Act of 1867, which led to the creation of Canada. The AI responds with a serviceable essay in seconds.

This, Southam says, should challenge instructors to ask students more nuanced questions. When one Selkirk teacher found ChatGPT nearly passed their online course, Southam says, it encouraged them to review their material.

"As soon as you get into creative and critical thinking, that's where ChatGPT has trouble and that's where we want to take our work," says Southam. "We want to have creative and critical thinkers."

Southam has spotted other errors. The chatbot sometimes pulls information from sources like blogs that don't hold up to critical analysis. Its answers are pancultural and struggle with regional context. ChatGPT also doesn't have access to oral histories, and omits cultures with poor access to the internet.

ChatGPT may have answers for everything, but it can't tell you much more about your community than Wikipedia can.

"I'm realizing that it's only one part of human collective intelligence that's being represented in the results that are getting spit out."

The master and apprentice

Abby Wilson points to four images of Kootenay Lake on her screen and begins picking them apart. One has missing reflections. Another has incomplete sun beams. None of them catch the eye.

Each is a variation on a poor picture she took from her phone of a ferry crossing the lake, then uploaded to the AI image creator Midjourney. Wilson, a Nelson-based landscape painter, can see errors in each image. But she can also see how they might be improved.

In her studio, Wilson paints her own image based on elements suggested by Midjourney. The sun breaking through clouds is now more dramatic, and the ferry is more visible. Using AI is giving her a different perspective on her own art.

"I think first draft is a good way to think about it. Just making a visual variation - like taking an old painting and asking, how could I have made this better? Just different takes on an idea."

Midjourney, one of several AI image creators available for free or trial use, operates similarly to ChatGPT. It scrapes the internet for image data, then responds to text prompts by creating original art in any style you want.

But it comes with its own controversies. Midjourney uses image data without consent. So if you've put any type of visual art online, Midjourney could be using it without your knowledge.

Wilson acknowledges this and points out other shortcomings she's noticed. It's not particularly good at drawing real places (a request for images of Nelson returns a city that captures its vibe but wouldn't fool any residents).

It also has a racial bias: white people make up the majority of its image subjects.

"It's trained on the images that we have out there, and the images we have out there are based on our biases. So it reflects that back."

But Wilson is still excited by Midjourney's possibilities. She's OK with the ethics of it so long as it is only using reference images she provides, and the only work she sells is her own paintings.

She also doesn't worry that AI art will replace her. All art is iterative - Wilson's own style is influenced by the Group of Seven - and her patrons know there is only one Abby Wilson.

"I don't feel super threatened because the nature of the art market is that originals do have more value."

Nelson artist Abby Wilson in her studio with two pictures she created using AI tools. The painting on the easel was created after she uploaded a photo to Midjourney, then combined elements of the AI's suggested variations to paint the final version. Wilson also used ChatGPT to write a haiku for her young daughter about being nice to the family's cat, then used the haiku as a prompt to make the image with Midjourney, which she can be seen holding here. Photo: Tyler Harper

That certain something

AI has prompted an important question: should we use AI in these ways?

Some of the most influential voices in tech say no. Last month, an open letter signed by over 2,000 people - including influential Canadian expert Yoshua Bengio - called for a six-month pause on AI development until safety protocols are added.

Avi Phillips is of two minds about it. The owner of Transform Your Org, a Nelson-based digital services company, has been using ChatGPT to create content for social media posts and websites, as well as to develop an outline for an ebook. He describes his first experience with ChatGPT as "magic."

"The thing that I loved about the internet, initially, was anything I wanted to learn about was there, available for me, and with ChatGPT I'm back to that kind of child-like feeling of learning things at my speed."

Phillips also sees AI with open eyes. He worries about how it might be used to spread misinformation - which is easily done, since ChatGPT has no built-in fact checker - or for malicious activities like making deepfakes, which are images altered to recreate a person's likeness.

He also doesn't think OpenAI should have released ChatGPT to the public before it had finished development. "We're all part of this guinea pig training for this AI."

Avi Phillips, owner of Transform Your Org, is seen here with his anime-style doppelganger. The image on the right was generated by Phillips using AI software.

Joe Boland, a Trail-based health coach and owner of Darn Strong Dads, uses ChatGPT to draft curriculums specific to his clients' needs. Recently he asked it to write out a six-month schedule that included nutrition exercises, homework assignments and biweekly Zoom meetings.

The result impressed Boland, but he's found the AI fails when tasked with finding solutions to people's health issues. It can fill a spreadsheet, yet can't understand people.

It reminds Boland of his time working at a call centre. He remembers answering the phone to frustrated customers who were immediately relieved to be speaking with a person and not navigating an automated system.

"That's why I don't necessarily fear that something like ChatGPT or AI could replace what we do, because there is a certain je ne sais quoi about needing to talk to somebody, especially with something as vulnerable as our health or our wellness or whatever else."

"We need that human connection that I don't think AI can necessarily offer."

But it's also not going away. As people race to figure out how AI can help or hinder them, Phillips says they also need to consider the deeper question of what this means for humanity.

"One thing is for sure, we have to have this conversation. We can't pretend it's not there. It's the new paradigm."

READ MORE:

B.C. researchers use AI to predict a cancer patient's survival rate with 80% accuracy

What can ChatGPT maker's new AI model GPT-4 do?

This Canadian VFX studio is using AI to cosmetically alter your favourite actors

@tyler_harper | tyler.harper@nelsonstar.com

See the article here:

This story about AI was written by a human: How Nelson tech experts are using new artificial intelligence tools - Nelson ... - Nelson Star

Revolve Just Put up 3 Billboards Created With Generative AI Tools Like Midjourney and Stable Diffusion – Business Insider

Earlier this week, the online clothing retailer Revolve put up three billboards along Interstate 10 leading to Palm Springs, in celebration of its 20th anniversary and its upcoming Revolve Festival.

What made these particular billboards different from other out-of-home creative is that they were developed using generative AI tools. The art was made by a new two-person agency called Maison Meta, which worked with Revolve's VP of creative, Sara Saric. As part of the campaign, people can also buy the AI-generated clothing featured in the art.

Thanks to the generative AI explosion, anyone can create a piece of art just by typing in a prompt. But many advertisers are still working out how to build these tools into their workflow.

Using generative AI to create high-quality images for ads, especially scaled to billboard size, is a huge challenge because it's difficult to control the final result. The current iteration of these tools, for instance, is notorious for having trouble generating images of hands.

Maison Meta's team, which includes partner Nima Abbasi and founder and creative director Cyril Foiret, has developed a workflow using a series of generative AI tools including Midjourney, Stable Diffusion, and others. Foiret used this process to maintain tight control and create production-quality images that were used for three billboards and six street posters for Revolve's 20th anniversary campaign "Best Trip."

"Midjourney gives more of the 'wow' effect as far as the output, but Stable Diffusion has a lot more control over the position of the model, the lighting as well as the final composition," Foiret told Insider.

Foiret used multiple models built on top of Stable Diffusion to hone the art for the out-of-home creative. For instance, he used an extension called ControlNet, which lets artists reposition objects within the AI-generated image, tweak facial expressions, swap out faces, and fine-tune lighting. Foiret can paint over a specific area within the image to isolate where he wants the change to happen and execute it quickly with a prompt. He could tell the technology, for example, to add a motorcycle jacket to a specific person.

Foiret also used the AI tools to scale up the images to billboard size, breaking the image into tiles and making sure each tile retained its fidelity even when it was blown up to a massive size, a process called "AI upscaling." Maison Meta also used a human staffer at a company called Twisted Loupe to retouch the final versions of the campaigns before they went to print.

There are time and cost savings when using generative AI tools, Foiret said. It took about three weeks to generate the art for the billboards, whereas scheduling a photoshoot can take months.

"You can have a campaign that goes over a million dollars, depending on which photographer you use, or which model you have for your campaign," Foiret said.

Revolve co-founder and co-CEO Michael Mente declined to give Revolve's budget for the billboards, but said it was on par with what the company had spent in the past for splashy ad campaigns.

"There are possibilities for cost savings, but we really want to put the energy and the investment to make sure it's just the best quality output," Mente said. However, he envisions re-allocating budget within an ad campaign. For instance, he can see Revolve shifting funds away from the photoshoot to some other aspect of the campaign.

"We won't have to spend on a photoshoot potentially, tens of thousands of dollars building a set," he said.

Read more from the original source:

Revolve Just Put up 3 Billboards Created With Generative AI Tools Like Midjourney and Stable Diffusion - Business Insider

Yongwook Seong reimagines Inuit community’s dwellings and … – World Architecture Community

Canadian designer Yongwook Seong has reimagined the dwellings and traditions of Inuit communities with the help of AI tools such as Midjourney, a text-to-image program.

The series, called Nuna, is a set of fictional architectural examples showcasing various tech-driven versions of settlements of the Inuit communities in the Arctic and subarctic, created with the help of Midjourney.

Aiviq House: Aiviq (walrus) turns himself/herself into a house

The dwellings, taking references from the Inuit communities' cultures and mythology, present 10 types of settlements that are shaped by locally-available materials.

Seong has used snow, ice, whale bones, Arctic and subarctic animal furs, sacred stones and earth to reflect the inhabitants' culture and traditions.

The Inuit and Yupik are hunter-gatherers who form the largest group of the Inuit-Aleut peoples. Scattered across four countries in the Arctic region, they live in Greenland, Labrador, Quebec, Nunavut, the Northwest Territories and Alaska, and speak Inuit-Aleut languages.

Arviq Pavilion: Arviq (bowhead) whale has long been an invaluable being for Inuit

For example, in the visual above, the designer has turned an Arviq (bowhead whale) into a pavilion whose smooth edges offer shelter.

"As one of the most favourite beings by the creator in Inuit mythology, it provided Inuit with valuable resources for survival. The pavilion celebrates the return of Arviq and abundance of marine life," Yongwook Seong told World Architecture Community.

"Over a millennium, the Inuit have inhabited in the Arctic/Sub-Arctic areas, which include the currently northern regions of Canada," said Yongwook Seong, a Banff-based designer.

"The Inuit celebrates enriched histories that have long survived in the harsh environment. Today, Inuit cultures and traditions have remained resilient with active political activism and cultural renewal movements."

Ceremonial House

The visual above shows Ceremonial House, clad in colorful 3D stones. The decorative elements of the door are also created by laying the remaining stones horizontally.

"Music and dance elevate Inuit and their spirit, and they are a medium to transcend their physical world and communicate with sacred realms and beings," according to the designer.

In the series, Seong attempts to imagine Arctic and sub-Arctic architecture inspired by Inuit traditions. The designer stated that "the Inuit have treated the land as a sacred being."

"It is the place where every animate and inanimate being is created from. Every entity is bonded with the land," he added.

"Likewise, a human being is deeply attached to the land and is therefore told to treat it as part of himself/herself."

While creating those realistic images, the designer used only Midjourney to generate them, with Photoshop used afterward to refine the images.

AI text-to-image programs such as Midjourney, DALL-E, and Stable Diffusion are pushing the boundaries of producing realistic architectural imagery.

Ijiraq

In the Inuit religion an Ijiraq is "a shapeshifting creature that is said to kidnap children, hide them away and abandon them."

The designer turns Ijiraq into a giant caribou that lures and hunts other caribou. Ijiraq is a mythical being that can transform into any form; it is quite difficult to discern, as it can disguise itself as an animal or a human.

Issitoq Observatory

The word Issitoq refers to a flying eye, a deity that punishes those who break taboos. The designer has imagined the flying eye as an observatory landing on the ground in search of taboo breakers.

"An alternative contemporary reflecting the Inuits living traditions and mythology"

The designer explains his particular focus on Inuit communities with these words: "Forced resettlements, cultural assimilation and religious conversion left a multi-generational vacuum within their communities for the past centuries."

"The aftermath has lingered for the prolonged period, and subsequently disrupted the self-sufficient way of nomadic life," Seong told World Architecture Community.

"Today, there have existed ongoing efforts to renew Inuit cultures and traditions."

"As part of paying a tribute to the communities and their efforts, I wanted to re-imagine an alternative contemporary reflecting the Inuits living traditions and mythology," he emphasized.

Mosaic Igloo

Clad in colorful mosaic tiles, the tube-shaped dwelling, called Mosaic Igloo, is an architectural aspiration for Cultural Mosaic.

The 10 types of arctic dwellings are Nunangat Vault, Amauti House, Nanuq Den, Ijiraq, Issitoq Observatory, Arviq Pavilion, Aiviq House, Stargazing Tepee, Ceremonial House and Mosaic Igloo.

Each home, covering approximately 5 to 8 square meters, has its own identity and a sculptural look, while the minimal footprint draws attention in the visuals.

Nanuq Den

The sculptural Nanuq Den has an undulating door and provides a minimalist shelter. Nanuq, or the polar bear, is a highly regarded spirit among the Inuit.

As Nanuq enters a house (den), he or she removes the fur skin and transforms into a human being inside the house.

Nunangat Vault

Nunangat Vault takes the shape of an ear canal with a layered entrance. The designer has reimagined the snow as a layered vault that leads to an archive of Arctic and subarctic storytelling and traditions. The entrance invites visitors into the rich oral histories of Inuit cultures and traditions.

"According to Uqalurait (An Oral History of Nunavut), any objects contains a soul (inua). And these souls travel across different beings. Inuits are told to be respectful when they hunt animals as they share the same inua," Seong continued.

"It is believed that animals sacrifice themselves to one that they find worthwhile. And Inuit are not advised to show off their catch. When they mistreat a particular animal, the offended animal would make themselves impossible to be hunted by humans," he added.

Stargazing Tepee

A tepee, or tipi, in Inuit communities is made of animal hides or bark. In this visual, the designer has proposed this snow hut, named Stargazing Tepee, to provide a warm and intimate space for stargazing.

The top image of this article is Amauti House. The amauti (also known as a parka) is worn by Inuit women of the eastern area of Northern Canada. The designer has turned the amauti into a house on its wearer's burial site.

Based in Banff, Yongwook Seong is a designer holding a Master of Architecture degree from the University of British Columbia, Vancouver. Seong's interests lie in various fields, including architecture, furniture design, lighting design and visual arts.

All visuals by Yongwook Seong.

> via Yongwook Seong

Visit link:

Yongwook Seong reimagines Inuit community's dwellings and ... - World Architecture Community

Microsoft Stands At Midjourney: You Can Now Experience Its Image … – World Nation News

Microsoft has introduced a new tool that allows users to create unique images using artificial intelligence technology. The tool, called Image Creator, uses a combination of OpenAI's DALL-E technology and Microsoft Edge's sidebar to allow users to create custom images without the need for technical knowledge.

DALL-E technology was developed by OpenAI and uses a neural network to generate images from text descriptions. For example, if given a description of a pink elephant playing soccer, DALL-E will generate an image of a pink elephant playing soccer.

Meanwhile, the sidebar allows users to access the Image Creator tool directly from the browser, meaning no additional software needs to be downloaded or installed.

The combination of these two technologies allows users to create custom images without the need for technical knowledge. All it takes is typing a description of the image you want to create into Edge's sidebar, and then clicking the build button.

Image Creator will take care of generating the image based on the description provided. This tool can be extremely useful for graphic designers and content editors who need to create images quickly and efficiently.

It can also be a fun tool for those who just want to experiment with AI technology. However, there are also some concerns about AI technologies and imaging.

Some experts have pointed out that DALL-E's ability to generate realistic images from descriptive text could be used to create false or misleading images. For example, someone might use the Image Creator tool to create an image that looks real, but actually isn't.

To address these concerns, Microsoft has noted that the Image Creator tool is designed to be used only for legitimate purposes and that the company is working on ways to detect and prevent abuse of AI technology.

Overall, Microsoft's Image Creator tool appears to be an interesting application of AI technology that could have a wide range of uses. However, it is important that it is used responsibly and that concerns about creating false or misleading images are adequately addressed.

Go here to read the rest:

Microsoft Stands At Midjourney: You Can Now Experience Its Image ... - World Nation News

Microsoft stands up to Midjourney: now you can try its image … – Gearrice

Microsoft has introduced a new tool that allows users to create unique images using artificial intelligence technology. The tool, called Image Creator, uses a combination of OpenAI's DALL-E technology and the sidebar of Edge to allow users to create custom images without the need for technical knowledge.

DALL-E technology was developed by OpenAI and uses a neural network to generate images from text descriptions. For example, if given a description of a pink elephant playing soccer, DALL-E will generate an image of the pink elephant playing soccer.

Edge's sidebar, meanwhile, allows users to access the Image Creator tool directly from the browser, meaning no additional software needs to be downloaded or installed.

The combination of these two technologies allows users to create custom images without the need for technical knowledge. All it takes is typing a description of the image you want to create in Edge's sidebar, and then hitting the build button.

Image Creator will take care of generating an image based on the description that has been provided. This tool can be very useful for graphic designers and content editors who need to create images quickly and efficiently.

It can also be a fun tool for those who just want to experiment with AI technology. However, there are also some concerns around AI technology and imaging.

Some experts have pointed out that DALL-E's ability to generate realistic images from text descriptions could be used to create false or misleading images. For example, someone might use the Image Creator tool to create an image that looks real, but actually isn't.

To address these concerns, Microsoft has noted that the Image Creator tool is designed to be used for legitimate purposes only and that the company is working on ways to detect and prevent misuse of AI technology.

In general, Microsoft's Image Creator tool appears to be an interesting application of artificial intelligence technology that could have a wide range of uses. However, it is important that it is used responsibly and that concerns around creating false or misleading images are adequately addressed.

Read more from the original source:

Microsoft stands up to Midjourney: now you can try its image ... - Gearrice