Watch an international crew of astronauts launch to the space station today – The Verge

This morning, a trio of astronauts will make their way to the International Space Station, launching on top of a Russian Soyuz rocket from Kazakhstan. The newcomers will join the three astronauts already living on board the ISS, bringing the total number of crew to six.

The incoming passengers include NASA astronaut Randy Bresnik, Italian astronaut Paolo Nespoli of the European Space Agency, and cosmonaut Sergey Ryazanskiy of Roscosmos. All three have flown to space before, and they are scheduled to stay around six months on the ISS, leaving sometime in December. They'll be arriving in Earth orbit just ahead of the solar eclipse scheduled for August 21st, which will pass over the continental United States. Bresnik told CBS News that they'll be able to monitor the eclipse, since they'll pass underneath it three times in orbit. "We've got special filters for the cameras to take those pictures," Bresnik said. "We'll share it right away with everybody."

The Soyuz is slated to launch today at 11:41AM ET

Bresnik and the others are slated to launch today at 11:41AM ET, or 9:41PM in Kazakhstan, and will spend six hours in orbit before docking with the ISS. Once they arrive, they'll join up with cosmonaut Fyodor Yurchikhin, as well as NASA astronauts Jack Fischer and Peggy Whitson. Whitson has been on the ISS for over eight months now, and is set to break the record for the most cumulative time in space of any US astronaut before she leaves the ISS in September.

The new arrivals mean there will be two cosmonauts on the ISS, along with three NASA astronauts and a crew member from a partnering space agency. It's an unusual mix for the station. Typically, the ISS houses three Russian cosmonauts, while the other three crew members include a mix of NASA astronauts and another international crew member. However, Russia recently decided to reduce the number of cosmonauts on the station to two in order to cut costs. It's only a temporary change until Russia finishes and launches a new segment for the station called the Multipurpose Laboratory Module. But in the meantime, NASA has opted to send an extra crew member to the ISS.

NASA's coverage of the launch is scheduled to begin at 10:45AM ET. Check back to watch the mission live.


NASA Announces Selection Of Two Hot, Ripped Astronauts For Man-On-Man Mission To Mars – The Onion (satire)

HOUSTON – After an exhaustive 18-month evaluation process in which an applicant pool of hundreds was narrowed down to the two very buffest candidates, NASA announced Friday that it had chosen a pair of hot, ripped astronauts to take part in the first-ever man-on-man mission to Mars.

Shirtless and oiled-up for their appearance before the press, former Air Force captain Stephen Dunhill and Malibu, CA lifeguard Blake Brawner were introduced by officials who said the two tanned studs had completed an Astronaut Corps training program that pushed them to their mental, physical, and carnal limits. NASA confirmed that the two mouthwatering male specimens possessed both the courage and the raw, insatiable lust needed to complete the landmark mission.

"For centuries, humanity has gazed up at the bright red planet in the night sky and dreamed of putting a man on a man on Mars," said NASA acting administrator Robert Lightfoot Jr., explaining that the agency was confident the two hard-bodied astronauts could endure the harsh conditions and constant thrusting the six-year roundtrip mission will require. "As they explore the planet and each other's chiseled bodies during this mission, these two slabs of prime beefcake will advance our understanding of the universe and bring us one step closer to the day when humans build a civilization on another planet and then fuck each other hard."

"These brave, horny muscleboys will be true pioneers," Lightfoot added.

Having received more than 800 résumés and modeling portfolios, officials said they invited the 25 hunkiest applicants to the Johnson Space Center for medical exams to confirm they met stringent requirements for height, weight, visual acuity, testosterone levels, and pectoral circumference. Those candidates certified as sufficiently Adonis-like and hungry for cock then reportedly participated in a flight simulation inside a replica of NASA's new Penetrator spacecraft, which has been built for the man-on-man mission's planned launch in 2020.

According to sources, the prospective astronauts underwent grueling tests in which they were observed as they piloted the model spacecraft, maneuvered through the cramped cabin to check instrument panels while executing seamless reach-arounds, responded to simulated emergency scenarios, and negotiated the delicate entry into Mars' atmosphere while having their testicles played with.

NASA representatives noted that candidates were also strapped to a gyroscope in the 69 position to evaluate their ability to simultaneously perform and receive oral sex while spinning rapidly along multiple axes.

"Throughout the journey, from launch to landing, we'll be following the Penetrator's progress along its charted course and monitoring the crew's vital signs, including their libido level and recovery time between spectacular climaxes," said Lightfoot, adding that Mission Control will know immediately if, for example, the men's advanced blowjob techniques do not function as anticipated in a zero-gravity environment. "Once on Mars, the astronauts will set up their habitation module and fix any mechanical issues with its oxygen generator, fuck swing, or water purifier."

The acting head of NASA went on to detail other preparations for the mission, such as making sure the ship's payload contained adequate supplies of the calorie-rich foods formulated to quickly re-energize the men after each round of vigorous mind-blowing sex. On the planet's surface, the astronauts will reportedly conduct scientific tests, collect soil samples, and, once they are sealed safely back inside the airlock, rip each other's spacesuits off so they can immediately resume sucking and fucking.

Lightfoot praised the two luscious pieces of top-shelf manflesh who stood beside him at the press conference, observing that Dunhill, a decorated pilot, skilled engineer, and fellatio expert with steely blue eyes and six-pack abs, and Brawner, a part-time personal trainer with a chiseled jawline and a 10-inch penis, passed the training program with flying colors.

"Soon mankind will embark upon a new frontier, one that many of us have waited for our whole lives," Lightfoot said. "For those of you who want to follow the progress of our astronauts during their historic journey, please note that a continuous POV live feed will be available on NASA's website."


USDA Awards $4.6 Million in Nanotechnology Research Grants – The National Law Review

Since 1996, Carla Hutton has monitored, researched, and written about regulatory and legislative issues that may potentially affect Bergeson & Campbell, P.C. (B&C) clients. She is responsible for creating a number of monthly and quarterly regulatory updates for B&C's clients, as well as other documents, such as chemical-specific global assessments of regulatory developments and trends. She authors memoranda for B&C clients on regulatory and legislative developments, providing information that is focused, timely, and applicable to client initiatives. These tasks have proven invaluable to many clients, keeping them abreast of developing issues so that they can respond quickly and prepare for the future of their business.

Ms. Hutton brings a wealth of experience and judgment to her work in federal, state, and international chemical regulatory and legislative issues, including green chemistry, nanotechnology, the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), the Toxic Substances Control Act (TSCA), Proposition 65, and the Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) program.


RNA used to make ‘living computers’ for nanotechnology – Digital Journal

It's an example of engineers and biologists coming together to create an innovative solution for performing calculations. The implications are a potential game-changer for intelligent drug design and smart drug delivery. Other fields that could be affected include green energy production, low-cost diagnostic technologies, and the development of futuristic nanomachines for gene editing.

The basis of the new technology is the natural interaction between nucleic acids: in this case, the predictable and programmable RNA-RNA interactions. RNA is ribonucleic acid, an important molecule built from long chains of nucleotides. A nucleotide contains a nitrogenous base, a ribose sugar, and a phosphate. RNA is involved in the coding, decoding, regulation, and expression of genes. This builds on earlier work in which DNA and RNA, the molecules of life, were shown to be able to perform computer-like computations, demonstrated by Leonard Adleman (University of Southern California) in 1994 ("Molecular Computation of Solutions to Combinatorial Problems").
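The "predictable and programmable" part comes from Watson-Crick base pairing: in RNA, A pairs with U and G pairs with C, so a strand binds a target exactly when it is the target's reverse complement. Below is a minimal Python sketch of that idea, with a toy AND-gate readout; it illustrates the pairing principle only and is not the researchers' actual circuit design.

```python
# Toy model of programmable RNA-RNA hybridization. Assumption: a probe
# binds its target iff it is the exact Watson-Crick reverse complement.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    """Return the reverse complement of an RNA strand."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

def hybridizes(probe: str, target: str) -> bool:
    """True when the probe pairs perfectly with the target."""
    return probe == reverse_complement(target)

def and_gate(input1: str, probe1: str, input2: str, probe2: str) -> bool:
    """An AND-like readout: signal only when both probes find their inputs."""
    return hybridizes(probe1, input1) and hybridizes(probe2, input2)

print(reverse_complement("AUGC"))                # GCAU
print(and_gate("AUGC", "GCAU", "GGAA", "UUCC"))  # True: both probes bind
```

Real molecular logic gates work through strand-displacement kinetics rather than a simple equality check, but the programmability rests on this same complementarity rule.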

Atomic structure of the 50S Large Subunit of the Ribosome. Proteins are colored in blue and RNA in orange. RNA is central to the synthesis of proteins.

Wikipedia / Vossman


What’s After FinFETs? – SemiEngineering

Chipmakers are readying their next-generation technologies based on 10nm and/or 7nm finFETs, but it's still not clear how long the finFET will last, how long the 10nm and 7nm nodes for high-end devices will be extended, and what comes next.

The industry faces a multitude of uncertainties and challenges at 5nm, 3nm and beyond. Even today, traditional chip scaling continues to slow as process complexities and costs escalate at each node. As a result, fewer customers can afford to design chips around advanced nodes.

In theory, finFETs are expected to scale to 5nm as defined by Intel. (A fully-scaled 5nm process is roughly equivalent to 3nm from the foundries). Regardless of the confusing node names, the finFET likely will run out of steam when the fin width reaches 5nm. So at 5nm or beyond, chipmakers will need a new solution. Otherwise, traditional chip scaling will slow down or stop completely.

For some time, chipmakers have been exploring various transistor options for 5nm and beyond. So far, only Samsung has provided details. In May the company rolled out its technology roadmap, which includes a nanosheet FET for 4nm by 2020.

Other chipmakers also are leaning toward similar structures in the same timeframe, even though they have not publicly announced their intentions. Nanosheet FETs and another variant, nanowire FETs, fall into the gate-all-around category. Other variants include hexagonal FETs, nano-ring FETs and nanoslab FETs.

Fig. 1: Types of horizontal gate-all-around architectures. Source: Qualcomm, Synopsys, Applied Materials

For now, gate-all-around technology appears to be the most practical option after finFETs. It's an evolutionary step from finFETs and shares many of the same process steps and tools. A lateral gate-all-around technology is basically a finFET on its side with a gate wrapped around it. Tiny wires or sheets serve as the channels.

There are other transistor options, as well. Some chipmakers are even looking at ways to scale using advanced packaging. Vendors are weighing the options and looking at the technical and economic merits of each. "The finFET can scale one or two generations," said Mark Bohr, a senior fellow and director of process architecture and integration at Intel. "But the question might be, 'Is one of the alternates a better option, whether it's gate-all-around, III-V materials or tunnel FETs?' If we had to, we could scale finFETs. But the question is, 'Is there a better option?'"

By III-V, Bohr is referring to a finFET with III-V materials in the channels, which can boost the mobility in devices. A tunnel FET (TFET) is a steep sub-threshold slope device that operates at low voltages.

While gate-all-around technology is gaining steam, it isn't the consensus pick yet. "I won't necessarily say that, but it's certainly getting a lot of attention," Bohr said in an interview. "It's too early to predict which ones will be successful. But there are enough good ideas to ensure there will be a couple more generations."

Analysts, however, believe that 10nm/7nm finFETs will last for the foreseeable future. FinFETs provide a "combination of higher performance, lower power consumption and lower cost," said Handel Jones, chief executive of International Business Strategies (IBS).

If next-generation transistors go into production at 5nm or beyond, the technology will be expensive and limited to specific apps. "Gate-all-around is likely to be adopted, but the major benefits will be high performance," Jones said. At 5nm, it will cost $476 million to design a mainstream chip, compared to $349.2 million for 7nm and $62.9 million for 28nm, according to IBS.

Fig. 2: IC design costs. Source: IBS
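Those IBS figures make the cost escalation easy to quantify; a quick calculation of the ratios (numbers as quoted above):

```python
# IBS mainstream-chip design-cost estimates quoted in the article (US$ millions).
cost_m = {"28nm": 62.9, "7nm": 349.2, "5nm": 476.0}

ratio_5_vs_7 = cost_m["5nm"] / cost_m["7nm"]
ratio_5_vs_28 = cost_m["5nm"] / cost_m["28nm"]

print(f"5nm vs 7nm:  {ratio_5_vs_7:.2f}x")   # ~1.36x
print(f"5nm vs 28nm: {ratio_5_vs_28:.1f}x")  # ~7.6x
```

In other words, moving from 7nm to 5nm adds roughly a third again to design cost, and a 5nm design costs about seven and a half times a 28nm one.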

To help customers get ahead of the curve, Semiconductor Engineering has taken a look at what's ahead and highlighted the difficult process steps.

Different options

There are at least three main paths forward: brute-force scaling, staying at mature nodes, and advanced packaging.

Those with deep pockets likely will continue down the traditional scaling path at 10nm/7nm and beyond. Gate-all-around is the leading contender beyond finFETs, at least for now. Longer term, there are other options, such as III-V finFETs, complementary FETs (CFETs), TFETs and vertical nanowires. Vertical nanowires involve stacking wires vertically.

A CFET is a more complex gate-all-around technology, where you are stacking nFET and pFET wires on top of each other. The current gate-all-around devices stack one type of wire, whether its nFET or pFET, on each other.

CFETs, TFETs and vertical nanowires are more revolutionary technologies and not expected in the short term. They will require new breakthroughs.

Fig. 3: Next-gen transistor architectures. Source: Imec/ISS.

So how will the high end play out? "7nm will be a long-lived node," said Gary Patton, chief technology officer at GlobalFoundries. "FinFETs will have a lot of legs. There is still a lot of room to extend finFETs."

After finFETs, there are several options in R&D. For example, GlobalFoundries is exploring nanosheets, nanowires and vertical nanowires.

The decision and timing to go with one technology over another depend on technical and economic factors. "You are trying to develop a process that is manufacturable and delivers a value proposition," Patton said. "This stuff is not as straightforward as it used to be. There is a lot more vetting required."

In fact, a given technology might be in R&D for a decade. Then, based on a set of criteria, the best technologies appear in the market. Many others fall by the wayside when that happens.

To be sure, though, not all companies will require finFETs and nanowires. Most will stay with 22nm planar processes and above. Many can't afford finFETs, and they're not required for analog, RF and other devices.

"10nm, 7nm and 5nm sound attractive," said Walter Ng, vice president of business management at UMC. "But how many can really afford it and justify the design and manufacturing expense? The demand pushing the bleeding edge is really for a select few."

But even those at 22nm and above face some challenges. "Everybody else needs to look at how they can continue to compete," Ng said. "They are trying to find a way to differentiate and squeeze out costs."

That's why many are drawn toward advanced packaging. All chips require an IC package. For example, customers can use traditional packages, such as flip-chip BGA. Advanced packaging extends that idea, integrating multiple die in the same package to create a high-performance system. 2.5D/3D and fan-outs are examples of this approach.

So what's the ultimate winner in the market? "There's not one answer," said David Fried, chief technology officer at Coventor. "People are really looking for the application to drive the physical solution."

Fried pointed out that there is no one-size-fits-all solution. For example, finFETs or follow-on transistors make sense for high-end microprocessors. "But for IoT devices, that may be an incorrect direction," he said. "There is no one application that is driving the entire market. People have to stop searching for one answer that fits everything. A lot of different things can win all at the same time, but it's going to be for different applications."

Meanwhile, looking into his crystal ball, Fried said: "My suspicion is that 7nm looks pretty evolutionary. It will be finFET. If we see a change beyond finFET, it could be at 5nm. But remember, a lateral gate-all-around nanowire device is like a finFET with two extra etches. Going from a finFET to a lateral gate-all-around nanowire device is pretty evolutionary. I hope we start seeing that at 5nm. Beyond that, we don't have much visibility."

Transistor trends and processes

Today, the finFET is the leading-edge transistor. In finFETs, the current is controlled by a gate implemented on each of the three sides of a fin.

A key spec is the gate pitch. The gate pitch for Intel's 10nm finFET technology is 54nm, compared to 70nm for its 14nm technology. (Intel's 10nm is equivalent to 7nm from the foundries.)

The big decision comes when the gate pitch approaches 40nm. Based on simulations from Imec, the finFET begins to teeter at a 42nm gate pitch. "The nanowire will scale below that and still have good electrostatic control," said An Steegen, executive vice president of semiconductor technology and systems at Imec. The nanowire FET, according to Imec, has demonstrated good electrostatic control at a 36nm gate pitch. Imec has also devised a nanowire down to 9nm in diameter.

Fig. 4: Imecs tiny nanowire. Source: Imec

In general, gate-all-around provides a performance boost over finFETs, but there are several challenges, namely drive current and parasitic capacitance. Compounding the issues is a relatively new layer called the middle-of-line (MOL). The MOL connects the separate transistor and interconnect pieces using a series of contact structures. In the MOL, parasitic capacitance is problematic, as is external resistance in various parts of the device. This includes the contact to the junction, where the low-resistance Schottky barrier and the silicide reside.

One version, a lateral nanowire FET, is where you take a finFET and chop it into pieces. Each piece becomes a tiny horizontal nanowire, which serves as the channel between a source and drain.

Nanosheet or nanoslab FETs are the other common variants. Both technologies resemble a lateral nanowire FET, but the wires are much wider and thicker.

Each version has some trade-offs. The nanosheet FET "is not quite as revolutionary as they might want it to sound," Intel's Bohr said. "It's just finFETs laid on their sides. Not sure if the value is quite as strong as nanowires."

In nanowire FETs, the gate surrounds the entire wire, enabling more control of the channel. "It's this improved gate control that enables you to continue to scale the gate length," said Mike Chudzik, senior director of the Transistor and Interconnect Group at Applied Materials.

As stated above, a finFET is cut into pieces. As a result, the amount of surface area on the device decreases. "You are losing that real estate of silicon," Chudzik said. "I'm sure you are gaining in off-current, but you are losing in overall drive current."

That's why a nanosheet FET makes sense. "That's where you start to elongate these wires," he explained. "You are gaining in volume for your drive current. In addition, you can also play tricks with the shapes of these wires or sheets to help reduce the capacitance."

Another version, the nano-ring FET, has a similar benefit. "The whole idea of the nano-ring is to actually squeeze the sheets together a little bit," he said. "What that does is effectively reduce the capacitance."

The first gate-all-around devices will likely have three wires. Over time, though, chipmakers will need to stack more wires on top of each other to provide more performance. "We certainly don't want to introduce new device architectures that last only a node. (So the idea) is to consider stacking more nanoslabs on top of each other," he said. "But you can't just keep infinitely stacking channels, because you get a lot of the same parasitic capacitance and resistance problems as you do with taller finFETs."

In a sign of things to come, GlobalFoundries, IBM and Samsung recently presented a paper on a nanosheet FET for 5nm and 3nm. The technology is said to show better performance with a smaller footprint than finFETs.

Fig. 5: Cross-section simulation of (a) finFET, (b) nanowire, and (c) nanosheet. Source: IBM.

Using extreme ultraviolet (EUV) lithography for some layers, the nanosheet FET from the three companies has three sheets or wires. It has a gate length of 12nm and a 44nm/48nm contacted poly pitch with 5nm silicon channels. The nFET has a sub-threshold slope of 75mV/decade, while the pFET is 85mV/decade, according to the paper.

In the lab, researchers stacked nanosheets with three layers of 5nm sheet thickness and a 10nm space between them. They demonstrated inverter and SRAM layouts using single-stack nanosheet structures with sheet widths from 15nm to 45nm. The technology "has superior electrostatics and dynamic performance compared to extremely scaled finFETs with multiple threshold and isolation solutions inherited from finFET technologies. All these advantages make stacked nanosheet devices an attractive solution as a replacement of finFETs, scalable to the 5nm device node and beyond, and with less complexity in the patterning strategy," according to the paper.

Fig. 6: Stacked nanosheet process sequence and TEM. Source: IBM, Samsung, GlobalFoundries.

Generally, the process steps are similar between gate-all-around and finFETs, with some exceptions. Making a gate-all-around is challenging, however. Patterning, defect control and variability are just some of the issues.

The first step in gate-all-around differs from a finFET. In gate-all-around, the goal is to make a super-lattice structure on a substrate using an epitaxial reactor. The super-lattice consists of alternating layers of silicon-germanium (SiGe) and silicon. Ideally, a stack would consist of three layers of SiGe and three layers of silicon.

Then, like a finFET flow, the next step involves the formation of the shallow trench isolation structure. "It's critical that the super-lattice has ultra-abrupt junctions between silicon germanium and silicon," Applied's Chudzik said.

Here comes the next critical step. In gate-all-around, the gate not only wraps around the channel, but it will also wrap around some of the contact area. This adds capacitance to the mix. "So you need to form what's called an inner spacer, where you actually separate the high-k from the source-drain region. That can be done with an ALD-type film," Chudzik said.

Then, using a replacement process, the SiGe layers are removed in the super-lattice structure. This, in turn, leaves the silicon layers with a space between them. Each silicon layer forms the basis of a nanowire.

Finally, high-k/metal-gate materials are deposited, thereby forming a gate. In effect, the gate surrounds each of the nanowires.

Mask/litho challenges

Along the way, there are also a series of lithography steps. At 16nm/14nm and 10nm/7nm, chipmakers are using today's 193nm immersion lithography tools and multiple patterning.

At 7nm and/or 5nm, the industry hopes to insert EUV. In EUV, a power source converts plasma into light at 13.5nm wavelengths, enabling finer features on a chip.

Chipmakers hope to insert EUV for the most difficult parts, namely metal1 and vias. They will continue to use traditional lithography for many other steps.

EUV can reduce the cost per layer by 9% for the metal lines and 28% for vias, compared to triple patterning, according to ASML. EUV "eliminates steps in the fab," said Michael Lercel, director of product marketing at ASML. "If you look at the cost of doing multiple immersion lithography steps, coupled with the other process steps, such as cleaning and metrology, we believe that EUV is less costly per layer versus triple patterning immersion and certainly quadruple patterning and beyond."

EUV isn't ready for production, however. ASML is readying its latest EUV scanner, the NXE:3400B. Initially, the tool will ship with a 140-watt source, enabling a throughput of 100 wafers per hour (wph).

To put EUV in production, chipmakers want 250 watts, enabling 125 wph. Recently, though, ASML has developed a 250-watt source, which will be shipped early next year.

EUV resists, meanwhile, are another stumbling block. To reach the desired throughput for EUV, the industry wants EUV resists at a dose of 20 mJ/cm². "Good imaging seems to be more towards the 30 mJ/cm² to 40 mJ/cm² range today," said Richard Wise, technical managing director at Lam Research. "So the dose is not necessarily where we would like it to be."

With a 30 mJ/cm² dose, for example, an EUV scanner with a 250-watt source produces 90 wph, which is below the desired 125 wph target, according to analysts.
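To first order, throughput at a fixed source power scales inversely with resist dose, plus a fixed per-wafer overhead. The sketch below is a back-of-envelope model, not ASML's actual throughput calculator: the exposed area, wafer-level power, and overhead are illustrative values fitted so the model reproduces the two operating points quoted above (125 wph at 20 mJ/cm² and about 90 wph at 30 mJ/cm², both with a 250-watt source).

```python
# Back-of-envelope EUV throughput model (illustrative, fitted parameters).
EXPOSED_AREA_CM2 = 500.0   # assumed printable area of a 300 mm wafer
POWER_AT_WAFER_W = 0.446   # fitted effective optical power at wafer level
OVERHEAD_S = 6.4           # fitted per-wafer overhead (stage moves, exchange)

def wafers_per_hour(dose_mj_cm2: float) -> float:
    """Per-wafer time = overhead + (dose x area) / power at the wafer."""
    exposure_j = dose_mj_cm2 * 1e-3 * EXPOSED_AREA_CM2  # mJ/cm2 -> joules
    seconds = OVERHEAD_S + exposure_j / POWER_AT_WAFER_W
    return 3600.0 / seconds

for dose in (20, 30, 40):
    print(f"{dose} mJ/cm2 -> {wafers_per_hour(dose):.0f} wph")
```

The model makes the trade-off concrete: cutting the dose from 30 mJ/cm² to 20 mJ/cm² recovers the 125 wph target, while pushing to 40 mJ/cm² drops throughput to roughly 70 wph.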

But developing resists at the desired dose is challenging. "There are a lot of fundamental physical challenges to lower that dose because of the stochastic effects in EUV," Wise said.

This involves a phenomenon called photon shot noise. A photon is a fundamental particle of light. Variations in the number of photons can impact EUV resists during the patterning process. It can cause unwanted line-edge roughness (LER), which is defined as a deviation of a feature edge from an ideal shape.
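The magnitude of photon shot noise can be estimated from Poisson statistics: if N photons land in a resist "pixel," the relative fluctuation in that count is 1/√N. The sketch below is a rough first-principles estimate under stated assumptions (a 10 nm × 10 nm pixel and full absorption of the incident dose; real resists absorb only a fraction, which makes the noise worse), not a lithography-grade stochastic model.

```python
import math

# EUV photon energy at 13.5 nm: E = hc/lambda ~ 1239.84 eV*nm / 13.5 nm ~ 92 eV.
EV_TO_J = 1.602e-19
PHOTON_EV = 1239.84 / 13.5

def relative_shot_noise(dose_mj_cm2: float, pixel_nm: float = 10.0) -> float:
    """Poisson relative fluctuation 1/sqrt(N) for photons landing in one pixel."""
    pixel_cm2 = (pixel_nm * 1e-7) ** 2            # nm -> cm, then squared
    energy_j = dose_mj_cm2 * 1e-3 * pixel_cm2     # dose deposited in the pixel
    n_photons = energy_j / (PHOTON_EV * EV_TO_J)  # assumes full absorption
    return 1.0 / math.sqrt(n_photons)

for dose in (20, 30, 40):
    print(f"{dose} mJ/cm2: ~{relative_shot_noise(dose):.1%} photon-count variation")
```

Even under these optimistic assumptions, a 20 mJ/cm² dose puts only about 1,400 photons in a 10 nm pixel, so feature edges fluctuate by a few percent, which is one driver of line-edge roughness and of the pressure to keep doses high.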

While the industry is wrestling with the resists, photomask makers are developing EUV masks. Today's optical mask consists of an opaque layer of chrome on a glass substrate. In contrast, an EUV mask is a reflective technology, consisting of alternating layers of silicon and molybdenum on a substrate.

"We need EUV in order to avoid triple patterning," said Aki Fujimura, chief executive of D2S. "This means that EUV masks will have a lot more main features than ArF masks, and that each of these features will be small. Since EUV more accurately reflects mask aberrations on the wafer, EUV masks need to print more of the smaller things, and each more accurately."

To make EUV masks, photomask manufacturers will require some new tools. For example, they want faster e-beam mask writers. As mask features become more complex, today's single-beam e-beam tools take a longer time to pattern or write a mask. Today's e-beams are based on variable shape beam (VSB) technology.

The solution is multi-beam mask writers. Today, IMS is shipping a multi-beam mask writer for both optical and EUV masks, while NuFlare is also developing multi-beam tools.

Multi-beam will help with mask yields, turnaround times and cost. "Most masks in the world will still be perfectly fine with VSB writers," Fujimura said. "But the critical few will need multi-beam writing to keep the write times reasonable."

In the most likely scenario that EUV is ready for 5nm, the demand for multi-beam writing will be high for some mask layers. For example, if a mask layer contains a large number of non-orthogonal, non-45-degree features, multi-beam will be required for sure. "193i is blind to small perturbations on the mask, so Manhattanization of those patterns works fine with relatively large stepping sizes," he said. "However, EUV can see much better, and that will hugely increase the shot count, making VSB writing unlikely. But these are very specialized masks for specialized chips. For the majority of mask layers, even though the number of main features on the mask will explode by factors, the number of shots needed to shoot the decorations and SRAFs will decrease substantially. An advanced VSB writer with sufficient precision may be fine for a majority of EUV masks."

Inspection/metrology challenges

Inspection and metrology are also critical at 5nm and beyond. "The trend toward vertical architectures introduces the challenge of buried defects for inspection and complex profiles for metrology," said Neeraj Khanna, senior director of customer engagement at KLA-Tencor. "EUV will experience high-volume adoption at these nodes, driving new random and systematic defect mechanisms. Stochastic issues will drive a need for higher sampling."

What does this all mean? "We expect these new architectures to drive new sets of requirements for metrology and inspection," Khanna said. "The industry has to continue to innovate and extend core technologies."



Once upon a time, the iPod Nano was AMAZING – Fast Company

In just a few short hours, the keys to the car industry's future will be handed over to the 30 customers who were first in line to buy Tesla's new Model 3. The car is Tesla's first foray into affordability, with the initial price point starting at about half the cost of Tesla's previous models, or around $35,000, although with an expected $7,500 U.S. tax credit, that price falls to $27,500. Tesla boss Elon Musk has made a few announcements about the new compact car, which will seat five adults, but he may have a few surprises up his sleeve. Here are three things to watch for when the Model 3 launches tonight:

1. Autopilot

Model 3s will come with the hardware for Autopilot, Tesla's partially self-driving system, already installed, but it's unclear when that feature will be fully functional and which features customers will have to pony up additional funds for. As Bloomberg notes, Musk has hinted that some of the most exciting self-driving features, like automatic lane changing, would be available around the time that the Model 3 was ready for launch (aka now).

2. Impressive Efficiency

Some sleuths over at Electrek think they've found the average efficiency of Tesla's vehicles buried in its website code: 237 Wh per mile, which would make the Model 3 one of the most efficient electric vehicles in the U.S. The car has a range of 215 miles (346 km) on a charge.

3. New Model S and Model X

Bloomberg thinks that Tesla might not only unveil its Model 3, but also roll out updated versions of its Model S and Model X cars. Adding new features to their more expensive models is a way for the company to make sure potential buyers are still drawn to their ultra-luxury cars.

ML


Tech improvements are becoming so dramatic that charts are … – Financial Post

Nathaniel Bullard

Fifteen years ago, Japan's Earth Simulator was the most powerful supercomputer on Earth. It had more than 5,000 processors. It consumed 6,400 kilowatts of electricity. It cost nearly US$400 million to build.

Two weeks ago, a computer engineer built a deep learning box, using off-the-shelf processors and components, that handily exceeds the Earth Simulator's capabilities. It uses a maximum of 1 kilowatt of power. It cost US$3,122 to build.

For the first time in writing this, I'm stumped for a chart. It is difficult, perhaps impossible, to show a 99.98 per cent reduction in energy use and a 99.9992 per cent reduction in cost in any meaningful way. It is enough to say that information technology has decreased in cost and increased in computational and energy efficiency to striking degrees.

I would argue that this dramatic improvement has a flattening, or even depressing, economic influence on energy. Dramatically reduced inputs with dramatically increasing outputs is a boon for consumers and businesses, unless those businesses sell the energy that drives those inputs. We've already seen this: In 2007, U.S. data centres consumed 67 terawatt-hours of electricity. Today, with millions of times more computing power, they consume 72 terawatt-hours, with less than 1 per cent growth forecast by 2020. Not the greatest news if you're a power utility that has imagined that more and more information technology will mean more energy demand.

Information technology's improvement over time has been largely a function of Moore's Law (which is less a law than an observation). Now, with Moore's Law potentially coming to its end, it would seem like the extraordinary improvements that got us from a room-sized US$400 million supercomputer to a US$3,000 desktop box in 15 years could be coming to an end, too.

If technology companies are no longer able to jam more transistors into a chip, does that mean that improvements in energy consumption will also come to an end? If chip improvements plateau, and deployment increases, can information technology find a way to provide a boost to energy demand?

I doubt it, for both hardware and software reasons.

Even as Moore's Law is tapping out for general-purpose chips, hardware is becoming increasingly optimized for specific tasks. That optimization, for such things as graphics processing or neural network computations for machine learning, leads to greater energy efficiency, too. Google now has its own application-specific integrated circuit, called the Tensor Processing Unit, for machine learning. The TPU delivered 15-30x higher performance and 30-80x higher performance-per-watt than central processing units and graphics processing units.

Then there is the software that runs on that custom hardware, which has direct applications for electricity in particular. Last year, Google unleashed its DeepMind machine learning on its own data centres and managed to reduce the energy used for cooling those data centres by 40 per cent.

So, new special-purpose chips are much more energy-efficient than older general-purpose chips and those efficient chips are now used to run algorithms that make their data centres much more energy-efficient, too.

In a famous 1987 paper, the economist Robert Solow observed that "you can see the computer age everywhere but in the productivity statistics." Today, we could say the same about the computer age and energy statistics.

Nathaniel Bullard is an energy analyst, covering technology and business model innovation and system-wide resource transitions.

Bloomberg View

Go here to see the original:

Tech improvements are becoming so dramatic that charts are ... - Financial Post

UM Board of Curators asks state for $150 million for MU project … – Columbia Missourian

COLUMBIA - Mun Choi, the UM System president, was authorized Friday to submit a state funding request for educational buildings at each campus.

The Board of Curators met Friday morning in Ellis Library's teleconference room and voted unanimously for Choi to submit the request. The request is for fiscal year 2019, which begins on July 1, 2018, so the discussion revolved around potential funding after that date.

The request listed priority projects for each of the four UM system campuses. The main item of discussion, and the most expensive of the projects, was MU's planned Translational Precision Medicine Complex.

Translational medicine focuses on research to discover new ways of diagnosing or treating health problems and also instituting those new techniques on actual patients.

With the recent big cuts made to the Missouri higher education budget, the curators said it is unlikely that these projects will be immediately funded, but they believe it is important for their request to be officially made nonetheless.

MU's translational medicine complex is expected to cost about $250 million, with $100 million coming from the school and $150 million requested from the state. It is estimated to generate over $500 million in economic impact and create 3,860 jobs, according to the appropriations request.

MU has taken preliminary steps for the new translational medicine complex to be built on the site of the former International Institute of Nano and Molecular Medicine, which was closed at the end of June.

MU is one of the 62 members of the Association of American Universities. This means it is considered a leading research institution across the U.S. and Canada. The other three UM System campuses benefit from MU's status as an AAU university, so investing in a new research facility at MU helps the whole system indirectly, curators said. For this reason, the translational medicine complex is the top capital funding priority for the system, they said.

Several times throughout the meeting the curators referenced their July 18 and 19 retreat in Columbia, which Maurice Graham, the chair of the board of curators, described as one of the most constructive curator meetings he has ever participated in. At Friday's meeting, Graham emphasized the need for the capital appropriations request to include projects from all four campuses in order to continue the spirit of intra-system cooperation that was a focus of that retreat.

All four campuses are represented in the appropriations request. The Kansas City campus and Missouri University of Science and Technology are each requesting funds for renovations of their chemistry and biological sciences buildings, and the St. Louis campus is requesting funds for "space consolidation and infrastructure."

Ryan Wrapp, the system's chief financial officer, emphasized the need for the system to move away from a reliance on state funding for school buildings. In the past, the state would fund new buildings entirely, he said, but now the system needs to move toward targeted fundraising, partnerships with private businesses, or fund-matching with the state.

Curator Jeffrey Layman pointed out the possibility of a future federal infrastructure bill that could provide additional funding for public universities. President Donald Trump has repeatedly called for such a bill, though lately Congress has been mainly focused on health care, and the president has not unveiled any major infrastructure bill.

Choi described the request to the state as the first step in a dialogue and as a "give and take." With the four campuses' top priorities formally submitted to the state, when there is money to spend the state will know exactly what the campuses hope to use it for, he said.

Curators noted that, historically, funding has often come to the system when it was not expected, so the process of requesting the funding remains important even when approval seems unlikely.

Supervising editor is Sky Chadde.

More:

UM Board of Curators asks state for $150 million for MU project ... - Columbia Missourian

Zuckerberg, Chan give UCSF $10 million for health data research – San Francisco Chronicle

Facebook CEO Mark Zuckerberg and his wife, Dr. Priscilla Chan, will contribute $10 million to UCSF to help fund an effort to merge data on 15 million patients across five UC medical campuses into one database.

The investment highlights the interest from investors and researchers in applying artificial intelligence to health data. The goal is to detect patterns in disease development and to allow doctors to better develop treatment plans for patients.

In oncology, for instance, computers could mine patient data to try to predict whether women diagnosed with ovarian cancer who stop responding to one type of drug may be more likely to respond to another type of treatment, based on previous cases.

The $10 million contribution is separate from the commitment by the couple's limited liability company, the Chan Zuckerberg Initiative, to invest $3 billion over the next decade to cure disease.

It will go toward UCSF's Institute of Computational Health Sciences. In addition to merging data from health records, it will be used to hire faculty members for the institute over the next five years, said Dr. Atul Butte, the institute's director.

"Big data and machine learning is hot in medicine right now," Butte said. "If you want machine learning to work, you need to see many, many cases before you can learn the patterns."

The soon-to-be-merged data is from electronic health records that are housed separately at UCSF, UCLA, UC Irvine, UC San Diego and UC Davis, dating back between five and nine years, Butte said.

While UCSF would have access to identifiable patient information, such as names, patient privacy laws require researchers to get authorization from patients, or approval from UCSF's Institutional Review Board, before accessing any identifiable data.

Artificial intelligence in health and drug development is a booming area. Emerging companies like London's Benevolent AI, San Bruno's Numerate and Menlo Park's NuMedii, co-founded by Butte, have attracted hundreds of millions of dollars from investors over the last several years.

Alphabet's health subsidiary Verily, formerly Google Life Sciences, recently launched a study to track health information from 10,000 people. IBM's Watson Health uses algorithms to sift through patient records and research papers from medical journals to help doctors diagnose and treat diseases. Amazon has assembled a team to build tools for electronic health records data, CNBC reported this week.

"The major tech titans are moving into this space at full tilt," said Dr. Eric Topol, a professor of molecular medicine at the Scripps Research Institute. "They realize this has unparalleled growth potential."

Artificial intelligence in medicine is taking off because until recently there wasn't enough data to draw meaningful conclusions, experts said. But improvements in genomic sequencing and medical monitoring technology are quickly changing that. Every person's genomic sequence alone generates billions of data points. Add that to the data collected by wearable devices, such as monitoring tools that track glucose levels, blood pressure, heart rhythm and other measures, and researchers have a rich pool of health information to parse.

"That's why this is a particularly exciting era in medicine," Topol said. "It's really about having enormous data sets, not just a one-off, but on a continuous basis."

The challenge, though, is cutting through the noise in ways that will enable physicians to zero in on individualized screening, treatment and prevention plans for patients.

"Just having all this data is not so important," Topol said. "It's processing it, working with it to change the future of medicine. Artificial intelligence could result in this promise of true prevention or far better treatments."

Catherine Ho is a San Francisco Chronicle staff writer. Email: cho@sfchronicle.com Twitter: @Cat_Ho

See the rest here:

Zuckerberg, Chan give UCSF $10 million for health data research - San Francisco Chronicle

Conference on integrative ayurveda – The New Indian Express

KOCHI: Amrita Samyogam 2017, a two-day conference on integrative Ayurveda and modern medicine, begins on August 6 and is being held in collaboration with Amrita University's School of Ayurveda. More than 60 experts and 1,000 delegates from around the world will take part. It will be inaugurated by the Union Minister of State for AYUSH, Shripad Yasso Naik.

The event will bring together allopathic doctors, Ayurveda practitioners and modern scientists on a common platform. It will identify strategies for integrating Ayurveda with Allopathy in the management of cancer, auto-immune diseases like arthritis, diabetes, neuro-degenerative diseases, and mental health. The conference will demonstrate how integrative medicine can be made a reality through examples of clinical integration, basic science studies, and application of new technologies.

Said Prof. Shantikumar Nair, director of the Centre for Nanosciences & Molecular Medicine at Amrita University: "Integrating India's ancient tradition of Ayurveda with evidence-based modern medicine has the potential to revolutionise world healthcare. Integrative medicine is becoming a popular specialty among physicians in Western countries because of the myriad ways in which it can benefit patients." Nair says the approach focuses on healing the person in his entirety, rather than merely treating the symptoms, by investigating the root cause of illness. "It is much more patient-centric and can positively impact chronic and lifestyle diseases for which modern medicine has no answer. Western medicine and Indian ancient healing sciences can be a win-win combination to effectively tackle the enormous healthcare challenges facing humanity," says Nair.

The event is expected to trigger important collaborations across the world in the field of integrative medicine, especially academic collaborations and funding opportunities.

See the original post:

Conference on integrative ayurveda - The New Indian Express

Precision Medicine Method Could Lead to New C. Diff Treatment – R & D Magazine

An experimental technique created a model that could kick-start the development process for new drugs targeting Clostridium difficile infections (C. Diff).

Scientists based at Virginia Tech's Biocomplexity Institute harnessed a mix of algorithms, simulations, and machine learning to test and predict the efficacy of novel treatments for infectious and immune-mediated diseases.

A modeling system of this nature could be particularly important when it comes to predicting the progression and treatment response to C. Diff. Antibiotics are the standard form of treatment for C. Diff, but they run the risk of perpetuating drug-resistant bacterial strains.

High rates of recurrence can lead to an uptick in healthcare costs and mortality.

The research team used this model to identify a potential alternative treatment target for C. Diff: a protein called lanthionine synthetase C-like 2, or LANCL2.

"Our modeling shows that we do not need to remove the pathogen nor directly influence inflammation in the case of CDI to have an effective treatment," said study author Andrew Leber, scientific director at BioTherapeutics, in a statement. "Simply restoring immune tolerance in the gut through LANCL2 or a similar immunoregulatory pathway, or boosting the gut microbiome to allow it to naturally outcompete pathogenic C. difficile strains, is effective in the absence of antibiotics."

This new model could be the first step in constructing a personalized disease treatment process for these conditions by translating preclinical results in animal models to clinical outcomes, pinpointing effective treatments, analyzing dosage effects, and forecasting patient reactions to combination therapies.

"The convergence of advanced data analytics, modeling, and artificial intelligence systems with high-resolution, large-scale patient data creates an opportunity to fundamentally transform how medicine will be practiced," said Josep Bassaganya-Riera, director of the Nutritional Immunology and Molecular Medicine Laboratory and CEO of BioTherapeutics, in a statement. "In this study and in our continuing efforts, we aim to be a leader in this developing field of precision, personalized medicine in infectious and autoimmune diseases."

Refining this model could ultimately minimize undesirable side effects and enable maximal efficacy of treatment in response to C. Diff and similar conditions.

The study appeared in the journal Artificial Intelligence in Medicine.

Go here to read the rest:

Precision Medicine Method Could Lead to New C. Diff Treatment - R & D Magazine

Retinal Cells Regenerated in Mice – Technology Networks

Scientists have successfully regenerated cells in the retina of adult mice at the University of Washington School of Medicine in Seattle.

Their results raise the hope that someday it may be possible to repair retinas damaged by trauma, glaucoma and other eye diseases. Their efforts are part of the UW Medicine Institute for Stem Cell and Regenerative Medicine.

Many tissues of our bodies, such as our skin, can heal because they contain stem cells that can divide and differentiate into the type of cells needed to repair damaged tissue. The cells of our retinas, however, lack this ability to regenerate. As a consequence, injury to the retina often leads to permanent vision loss.

This is not the case, however, in zebrafish, which have a remarkable ability to regenerate damaged tissue, including neural tissue like the retina. This is possible because the zebrafish retina contains cells called Müller glia that harbor a gene that allows them to regenerate. When these cells sense that the retina has been injured, they turn on this gene, called Ascl1.

The gene codes for a type of protein called a transcription factor. It can affect the activity of many other genes and, therefore, have a major effect on cell function. In the case of the zebrafish, activation of Ascl1 essentially reprograms the glia into stem cells that can change to become all the cell types needed to repair the retina and restore sight.

The team of researchers in the new study was led by Tom Reh, University of Washington School of Medicine professor of biological structure. The scientists wanted to see whether it was possible to use this gene to reprogram Müller glia in adult mice. The researchers hoped to prompt a regeneration that doesn't happen naturally in the mammalian retina.

Their research findings appear online July 26 in the journal Nature. The lead author is Nikolas Jorstad, a doctoral student in biological structure and in the Molecular Medicine and Mechanisms of Disease program in the Department of Pathology.

Other UW Medicine researchers on the study are Matthew S. Wilken, Stefanie G. Wohl, Leah S. VandenBosch, Takeshi Yoshimatsu, William N. Grimes and Rachel O. Wong, all from the UW Department of Biological Structure, and Fred Rieke from the UW Department of Physiology and Biophysics and the Howard Hughes Medical Institute.

Like humans, mice cannot repair their retinas. Jorstad said that to conduct their experiment, the team "took a page from the zebrafish playbook." They created a mouse that had a version of the Ascl1 gene in its Müller glia. The gene was then turned on with an injection of the drug tamoxifen.

Earlier studies by the team had shown that when they activated the gene, the Müller glia would differentiate into retinal cells known as interneurons after an injury to the retina of these mice. These cells play a vital role in sight. They receive and process signals from the retina's light-detecting cells, the rods and the cones, and transmit them to another set of cells that, in turn, transfer the information to the brain.

In their earlier research, however, the researchers found that activating the gene worked only during the first two weeks after birth. Any later, and the mice could no longer repair their retinas. Reh said that at first they thought another transcription factor was involved. Eventually they determined that genes critical to the Müller glia regeneration were being blocked by molecules that bind to chromosomes. This is one way cells "lock up" genes to keep them from being activated. It is a form of epigenetic regulation: the control of how and when parts of the genome operate.

In their new paper, Reh and his colleagues show that, by using a drug that blocks epigenetic regulation, called a histone deacetylase inhibitor, activation of Ascl1 allows the Müller glia in adult mice to differentiate into functioning interneurons. The researchers demonstrated that these new interneurons integrate into the existing retina, establish connections with other retinal cells, and react normally to signals from the light-detecting retinal cells.

Reh said his team hopes to find out if there are other factors that can be activated to allow the Müller glia to regenerate into all the different cell types of the retina. If so, it might be possible, he said, to develop treatments that can repair retinal damage, which is responsible for several common causes of vision loss.

This article has been republished from materials provided by the University of Washington. Note: material may have been edited for length and content. For further information, please contact the cited source.

Reference:

Jorstad, N. L., Wilken, M. S., Grimes, W. N., Wohl, S. G., Vandenbosch, L. S., Yoshimatsu, T., . . . Reh, T. A. (2017). Stimulation of functional neuronal regeneration from Müller glia in adult mice. Nature. doi:10.1038/nature23283

Link:

Retinal Cells Regenerated in Mice - Technology Networks

Unlike Roomba, Apple confirms it won’t upload, share or sell your home data from HomePod – AppleInsider (press release) (blog)

By Mike Wuerthele Thursday, July 27, 2017, 07:04 am PT (10:04 am ET)

Following iRobot's CEO declaring that he was looking to sell collated data from the automatic robot vacuum Roomba, a user reached out to Apple to see what it had in mind on the same topic. The user was concerned because Apple advertises it will use room-mapping technology to automatically tailor the HomePod's audio to fit the space.

In an email response, Apple reiterated its privacy stance to the reader.

When AppleInsider reached out to Apple for more detail and to confirm the statement's authenticity, it was told that there was "nothing at all new here" and that there is no change to the company's privacy page for Siri and hardware. The privacy page states clearly that Apple does not collect and sell user data gleaned by Siri or other services.

While Apple's privacy policy on Siri is not new, the room sensing technology set to be introduced in the HomePod itself is. Confirmation that Apple has no intention to upload or share the data should help put privacy advocates at ease.

Apple's HomePod uses its microphones to listen not only to the user, but to the audio being played in real time. The A8 analyzes not only the sound of the audio in the room, but its "time of flight": how long emitted sound takes to reflect back, which tells the device where each wall or sound-reflecting object is in the room. The speaker then adjusts its output accordingly based on this data.
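The time-of-flight idea boils down to a few lines of arithmetic: the delay before an emitted sound's reflection returns, multiplied by the speed of sound and halved for the round trip, gives the distance to the reflecting surface. This is a simplified sketch of the general acoustic principle, not Apple's actual implementation:

```python
SPEED_OF_SOUND_M_S = 343.0  # in dry air at roughly 20 degrees C

def wall_distance(echo_delay_s: float) -> float:
    """Estimate distance to a reflecting surface from the round-trip
    echo delay. The sound travels out and back, so the one-way distance
    is half of speed * time. A toy illustration only."""
    return SPEED_OF_SOUND_M_S * echo_delay_s / 2

# An echo arriving 20 ms after emission implies a reflecting
# surface roughly 3.4 m away.
print(f"{wall_distance(0.020):.2f} m")
```

A real system must also separate each wall's reflection from the music itself, which is where the A8's acoustic modeling does the heavy lifting.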

While not quite the same as how a Roomba can map a user's floors and furniture, it's still some general idea of how a person's home is laid out, or how large it is.

Unlike Apple, which plans to keep any home data stored on the device itself, Roomba collects and stores information on a user's home not only to improve its products, but also with the possibility that it could share and sell it in the future.

Roomba builder iRobot believes that the data could be of interest to Apple, Amazon, and Google to improve the data sets utilized in home automation software and services, or to suggest a new product to fill a service gap. To boost its bottom line, iRobot has started looking for customers willing to pay for that data.

As detailed this week, the 900-series Roomba automated vacuums measure the dimensions of a room, as well as furniture orientation, size of the devices, and where they are located in the room. This is accomplished by simultaneous localization and mapping (SLAM) technology.

Apple's $349 HomePod was revealed at the 2017 Worldwide Developers Conference and will ship in December. The speaker is powered by an Apple A8 chip featuring real-time acoustic modeling, audio beam-forming, and multi-channel echo cancellation. It features a subset of Siri, optimized for music consumption.

Read more:

Unlike Roomba, Apple confirms it won't upload, share or sell your home data from HomePod - AppleInsider (press release) (blog)

Whole Foods seeks art for limited-edition grocery bag – The Park Record

Whole Foods Market Park City wants to know how local artists view the old mining town.

The grocery store, which will open a new market this fall at 6598 N. Landmark Drive near the Tanger Outlets, is asking local artists to submit artwork to be featured on an exclusive, limited-edition shopping bag to celebrate the event, said Debbie LaBelle, Whole Foods metro marketing and community relations for Utah.

"We're excited to reach out to and partner with our talented pool of artists in Park City to hopefully capture the feel of the town that will be featured on our exclusive shopping bags," LaBelle said. "Whole foods will print a limited quantity of these canvas bags as part of its store opening activities and for a period of time thereafter."

The theme is Park City, and the deadline for submissions is Friday, Aug. 11. Digital artwork should be uploaded to the following site:parkcityphotographer.smugmug.com/upload/f6VtWz/WFPC.

The password is: WFPC.

"Whole Foods Market team members will select the design," LaBelle said. "The winner will be unveiled during our store opening activities during the fall. When we're 60 days out, we will announce the official opening date."

There are guidelines.

"We just wanted to make sure that the artists have some sort of framework to work with," LaBelle said. "We're really looking for an artistic representation that depicts their vision of Park City."

The bag size will be 12 inches by 12 inches.

"Images can include a drawing of Main Street, the [McPolin] Barn, a moose, the new Whole Foods store, or anything else that represents Park City to you," LaBelle said.

Art created with oil, watercolor, pen and ink will be accepted.

"The only thing we're not looking for is a full-on photograph," LaBelle said. "However, artists can manipulate an image so that it doesn't look like a photo.

"One thing I would like to say is that the artists will not lose the rights to their work."

LaBelle added that seeds for the contest were planted a few weeks ago.

"Whole Foods has a partnership with the Park City Professional Artist Association, and while I was brainstorming with [photographer] Deb DeKoff, we both cooked up the idea that it would be fantastic to celebrate local artists with a shopping bag design," LaBelle said. "The winning artist will receive a $250 Whole Foods Market gift card, along with bragging rights as the creator of the Whole Foods Park City bag."

Whole Foods is also hoping to locally produce and print the bags.

"This is a really fun project, and I hope we get a good number of people who want to participate," LaBelle said. "I want to encourage people to apply and submit their art. I can't wait to see the art that will be submitted."

Deadline for artist submissions for the Whole Foods limited edition grocery bag is Friday, Aug. 11. Digital artwork should be uploaded to the following site:parkcityphotographer.smugmug.com/upload/f6VtWz/WFPC.

Read the original here:

Whole Foods seeks art for limited-edition grocery bag - The Park Record

Here’s why today is a very sad day for music fans – JOE

The day that music died.

Some people will think of the gramophone or vinyl as the first real music player while others will suggest that tapes or CD walkmans revolutionised the music industry.

However, if you were heavily into music in the mid-noughties, an iPod was a must. You didn't have to carry around a selection of discs on your travels, and there was no hassle of scratching and damaging CDs while swapping them in the walkman.

The thought of being able to upload something on to this small device and that it would hold numerous CDs worth of songs was baffling but we did not question it, we just let it be.

Bono was the poster boy for the brand new music device, and until recently, his silhouette was still being used on the artist symbol in all of Apple's music products.

The first time this JOE writer ever came across the iPod was circa 2004/'05.

U2 had just released "Vertigo", and the rip-roaring chorus of "Hello, Hello... Ola", followed by the closing run of 'Yeahs', was danced to by the band and a girl, both of whom had this little music player in hand with the earphones plugged in.

Clip via absolutCommunication

Everyone at school wanted an iPod, even if you only had one CD. It was the cool new accessory to have but nobody could have foreseen the massive impact it would have on the music industry.

Well, 16 years since it was first launched on the market, Apple has decided to finally pull the plug on its iPod Nano and Shuffle products.

The iPod Touch is still alive in some form. Apple offers a Touch with 32GB of storage, but it's not really an iPod so much as an iPhone-lite.

Therefore, 28 July marks the end of Apple's era of standalone music players.

You may be reading this article on your smartphone and wondering to yourself 'why should I care about this, can't I get all my music on this thing anyway?' and you'd be completely right to think that.

Cast your mind back, though, and remember just how happy you were when you tore that iPod box apart on Christmas Day, your birthday, or maybe even just a normal day.

Remember how exciting it was to upload all your music on to it for the first time? The sound was so good and loud that even the oldest of songs seemed fresh.

Like a passport or a phone, your iPod might be sitting idle in a drawer somewhere at this moment in time.

If you do know of its whereabouts, take it out and give it a blast today, for old times' sake.

Read the original here:

Here's why today is a very sad day for music fans - JOE

Admission list blooper leaves Jadavpur University vulnerable to legal action – Times of India

KOLKATA: An indigenous software programme, used for the first time to collate data and publish the merit list for admission to ME/MTech and interdisciplinary courses at Jadavpur University, has malfunctioned, putting the university in the dock. This is the first major goof-up in the recent past, raising hackles among university teachers who fear the lapse leaves JU open to challenge in court.

According to JU sources, the software was used without a trial run, keeping the dean of engineering and the vice-chancellor in the dark. Even though the university has withdrawn the merit lists, many among the 2,000-odd candidates have taken snapshots of both lists.

JU VC Suranjan Das couldn't deny the lapse, which gives students on the first list an option to move court even if the university amends the merit lists. The VC gave a piece of his mind to the members, mostly contractual teachers, handling the admission process. Das has also ordered an inquiry into the lapse. The admission committee will upload the revised merit list after its meeting on August 2.

Go here to read the rest:

Admission list blooper leaves Jadavpur University vulnerable to legal action - Times of India

Play A VR MMORPG This Weekend In The Free Orbus VR Open Alpha – UploadVR

With Ready Player One's debut trailer releasing last week at Comic-Con and World of Warcraft getting fan-made environments that are explorable in VR, the concept of a VR MMO is top of mind for a lot of people. But the reality of the matter is that we're a long way away from something truly on the same level as Sword Art Online, .hack, The OASIS, or anything else like that. In the meantime, we've got games like Orbus VR.

As a made-for-VR MMORPG, Orbus VR is extremely ambitious. The game's creators have been working on it diligently for a while now and recently ran a successful Kickstarter that gained them additional funding. Now, after completing Closed Alpha testing, Ad Alternum is opening Orbus VR up to a limited Open Alpha test this weekend.

Starting tomorrow, July 28th, at 12PM CT, and running for 60 hours, Orbus VR will be playable in a totally free Open Alpha state. "We've had tremendous participation and enthusiasm from the VR community during our Closed Alpha testing period, and we've made some great strides in adding new features and improvements to the game," said Riley Dutton, Lead Developer on Orbus VR, in a prepared statement. "We've been hard at work getting things ready for everyone who's interested in the game to check it out, and we can't wait to see so many new faces in-game."

If you're interested in playing, all you have to do is sign up for a free account on the game's website right here. Throughout the weekend, the Open Alpha will reportedly have approximately 20 hours' worth of content consisting of:

You can read this first-time player's guide to get a good primer on what to do before you play. Sign up for free at the link above and help the team at Ad Alternum test Orbus VR this weekend! If you play, let us know what you think of this VR MMO down in the comments below!

Tagged with: mmo, MMORPG, Orbus

Link:

Play A VR MMORPG This Weekend In The Free Orbus VR Open Alpha - UploadVR

4 Insider Internet Tips to Keep Your Business Online Rain or Shine – BOSS Magazine


Ask most people about how the internet affects corporate productivity and they will probably bring up how social media distracts employees in the workplace. However, even as companies launch 5G networks this year, access to reliable internet still remains a key problem for modern enterprises, impeding overall productivity and ultimately impacting the bottom line.

For those of us who remember how unpredictable dial-up internet connections were in the 90s, access to the internet today may seem like a dream. However, while outages may be rare, they do happen. When they do, they can have catastrophic effects on businesses.

According to new research by Beaming, as many as 75 percent of organizations experienced internet outages during working hours in 2016. In turn, this led to thousands of dollars in lost sales and caused customer service nightmares.

So while the internet may seem like an afterthought for busy business owners in 2017, here are four tips that could help you keep your business online, rain or shine.

Rethink relying on your stock wireless router.

Of all the options available, Wi-Fi is known to be the most inconsistent, and fluctuates more than fixed cable ethernet systems.

James Lay from Hummingbird Networks argues that average Wi-Fi speeds generally account for roughly 30 to 60 percent of the speed advertised by internet providers. For example, if someone pays for 10 Mbps, they are most likely to receive an average speed of between three and six Mbps.

With most packages, internet service providers (ISPs) rent their customers the same stock wireless router as part of their package. It would be easy to assume that these modems are programmed to offer the highest speeds possible.

However, these stock modems broadcast on the same channels, which can cause interference if many people in your area have the same router. Considering the limited ISP options available and the number of people living and working in busy urban areas, chances are they do.

Most routers have the channel set to auto. If you want maximum throughput and minimal interference, channels one, six, and 11 are your best choice. You can find tips as to how to choose between those three channels here.
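As a rough illustration of that advice, here is a minimal Python sketch that picks whichever of channels 1, 6, and 11 has the fewest nearby networks. The function name and the ±4-channel overlap window are our own assumptions for the example, not something the article specifies:

```python
# Channels 1, 6, and 11 are the only non-overlapping 2.4 GHz choices;
# a network also interferes with channels up to four steps away.
NON_OVERLAPPING = (1, 6, 11)

def least_congested(observed_channels):
    """Pick whichever of channels 1, 6, or 11 has the least interference,
    counting any observed network within +/-4 channels as overlapping."""
    score = {}
    for candidate in NON_OVERLAPPING:
        score[candidate] = sum(
            1 for ch in observed_channels if abs(ch - candidate) <= 4
        )
    # Prefer the lowest interference count; break ties toward channel 1.
    return min(NON_OVERLAPPING, key=lambda c: (score[c], c))

# Example: neighbors crowd channels 1-6, so channel 11 wins.
print(least_congested([1, 1, 2, 6, 6, 5, 3]))  # 11
```

In practice you would feed this the channel list reported by your router's admin page or a Wi-Fi analyzer app, then set the channel manually instead of leaving it on auto.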

Have a redundant internet connection for mission-critical applications and systems.

We are living in an age when pretty much every type of business functions at least partly online.

From PR companies to pizza delivery services, businesses can suffer high costs if their internet goes down for any period of time.

According to a recent report by Beaming based on 500 businesses in the UK, 3.9 million enterprises (72 percent of total businesses) suffered as many as eight internet outages or 43 hours of downtime in 2015, which accounted for lost productivity worth an estimated $15 billion.

With so many business functions moving online, internet outages can effectively cripple businesses.

According to the Beaming study, 13 percent of businesses affected said they started losing money immediately during an outage.

Large and medium-sized businesses were found to resolve internet outages the quickest. However, due to their greater reliance on the internet for internal and external business functions, they lose much more revenue than smaller companies for every hour of outage they experience.

The most effective way to avoid outage is to have a redundant, or backup, internet connection with another ISP ready for if disaster strikes. According to Beaming, only 13 percent of businesses surveyed managed an outage by switching to an alternative connection.

The cost of having a redundant connection generally pays for itself with one outage per year, especially for retail businesses. With the Beaming report estimating more than 70 percent of enterprises suffer as many as eight outages per year, a backup connection should be a no brainer for modern businesses.
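The break-even arithmetic behind that claim can be sketched in a few lines of Python. The dollar figures and function name here are hypothetical examples, not numbers from the Beaming report:

```python
def backup_pays_off(backup_cost_per_year, outages_per_year,
                    hours_per_outage, loss_per_hour):
    """Return True if expected annual outage losses meet or exceed the
    annual cost of a redundant connection (i.e., the backup pays for itself)."""
    expected_loss = outages_per_year * hours_per_outage * loss_per_hour
    return expected_loss >= backup_cost_per_year

# Example: a $1,200/year backup line vs. eight 2-hour outages
# costing $500/hour in lost trade.
print(backup_pays_off(1200, 8, 2, 500))  # True
```

Even with far more conservative loss estimates than the report's, a single avoided outage often covers the annual cost of the second line.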

When evaluating providers, take into account the technology used and upload speeds offered.

According to the 2016 Speedtest Market Report, the typical fixed broadband user in the U.S. saw average download speeds exceed 50 Mbps for the first time ever during the first six months of 2016, which accounts for a 40 percent increase since July 2015.

According to the Speedtest report, the fixed broadband industry has witnessed stark improvements over the last year due to consolidation, speed upgrades, and an increase in companies offering fiber optic deployment.

However, while new players like Google Fiber are joining industry veterans like XFINITY and Verizon in offering fiber optic connections, not every regional provider has the capability yet. Before signing on the dotted line with an ISP, it's always worth doing your homework and checking what technology they are using to provide your service.

In general, we suggest you think about these technologies in tiers. From our perspective, a fiber connection is more likely to give you a stable low latency connection than a Tier 2 or 3 technology.

Tier 1: Fiber optic. This is the best option, but unfortunately it isn't available in all areas.

Tier 2: Cable internet, which is delivered over coaxial wires and is generally backed by a hybrid fiber network.

Tier 3: If you don't have options in Tiers 1 or 2, look for DSL or fixed wireless options. It's not well known, but if you're in a metropolitan area, fixed wireless providers can offer great service, especially as a redundant connection.

While ISP salesmen will promise you the moon and the stars, it is always best to ask for guarantees about upload and download speeds in your area, which types of connection offer the fastest and most reliable service, and whether they offer an SLA on your business connection.

Many ISPs advertise their download speeds, but some only release information about upload speeds on request. If your company works with large files such as video, webinars, and conference calls, a fast upload speed is essential.
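To see why upload speed matters for large files, here is a back-of-the-envelope Python sketch; the file sizes and speeds are illustrative only:

```python
def upload_minutes(file_size_mb, upload_mbps):
    """Minutes to upload a file, converting megabytes to megabits
    (1 MB = 8 Mb) and ignoring protocol overhead."""
    megabits = file_size_mb * 8
    return megabits / upload_mbps / 60

# A 2,000 MB video file over a 5 Mbps uplink vs. a 50 Mbps uplink.
print(round(upload_minutes(2000, 5), 1))   # 53.3 minutes
print(round(upload_minutes(2000, 50), 1))  # 5.3 minutes
```

Note the megabyte-to-megabit conversion: advertised connection speeds are quoted in bits per second, while file sizes are quoted in bytes, which is why transfers always feel roughly eight times slower than the headline number suggests.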

Use powerline network adaptors.

While wireless internet capabilities have increased dramatically over the last decade, Wi-Fi dead zones do still exist. After all, the architecture and construction of older buildings built before the internet existed were not developed with Wi-Fi routers in mind.

The problem could be that a room in your office or home is too far from your wireless access point, or it could stem from the construction of the building, especially if steel and reinforced concrete have been used.

But its not just older buildings that suffer from dead zones. Lisa R. Melsted shared how the developers of the futuristic China Central Television (CCTV) building in Beijing found that the design and structure of the building made wireless access erratic and unusable for many of the 10,000 employees working there.

However, there is light at the end of the tunnel if you find your office or home riddled with dead zones. One option is to buy a Wi-Fi range extender, which is extremely affordable and can be picked up for around $30 from most leading technology stores. However, for harder-to-reach areas, powerline network adaptors can save the day.

Similar to the way data can travel over phone or cable wires, powerline devices use your house or office's existing electrical wiring to transmit data from one power outlet to another, bypassing walls and frequency interference. While a little more expensive, ranging between $60 and $100 per pair, these devices are a lifesaver for tricky spaces that can't be reached by wireless signals.

Considering how quickly internet speeds and wireless networks are evolving, many businesses assume they will be covered without doing any prior research into ISP options and coverage. However, as residents of Sodo in Seattle found out in 2015, businesses can be left in the dark if ISPs drop or relocate networks: thousands of businesses there found themselves without internet when Sprint shuttered its wireless network in the area.

Fast and reliable internet is the lifeblood of the modern enterprise. With outages risking thousands of dollars in lost trade and productivity, it's better to do your homework and take extra steps to make sure you and your company are covered should problems arise.

Nick Reese is the founder and CEO of FindBroadband, a new platform to help enterprises and mom and pop businesses streamline their telecoms costs.

Read the original here:

4 Insider Internet Tips to Keep Your Business Online Rain or Shine - BOSS Magazine

Meaningful Conversation Is a Crucial Part of Medicine – Scientific American (blog)

Doctor, will my child be normal?

As a pediatric cardiologist and a developmental pediatrician, this question is part of daily conversations for us. The words or silence we provide in those initial moments shape a before-and-after moment in parents' lives. We consider and reconsider what parents need in order to process and decide what is best for their children. Sometimes we are able to perceive what parents need, and sometimes we make mistakes in understanding how to respond to them. Yet those moments are why we entered medicine in the first place, and they take time.

Time is a scarce resource in our current medical system. Doctors know what they need to do for their patients but often do not have the time or resources to do it. As a result, many are frustrated and leaving the profession, despite the calling they felt when they decided to become doctors in the first place. This is also the climate where new physicians are trained.

The uncertain future of the U.S. health care system underscores the deeper uncertainties physicians face in their daily conversations with concerned patients and families. In 2014 essayist and poet Meghan O'Rourke wrote, "Ours is a technologically proficient but emotionally deficient and inconsistent medical system that is best at treating acute, not chronic, problems." As our technology has advanced, we are able to care for more individuals with chronic medical conditions. Many of the issues dealt with in these visits involve time and effective communication, and it is well established that patient outcomes are related to effective communication with their doctors.

Communication matters in other ways. Writer Ursula K. Le Guin has written: "Words are events, they do things, change things." Her words are particularly poignant as we consider our current society, where President Donald Trump's tweets become daily news headlines, including his perspective on Charlie Gard, an 11-month-old child in the U.K. with a rare neurologic disease who was ordered moved to a hospice last week, where life support was to be withdrawn.

His parents had advocated that he should receive treatment for a rare mitochondrial disease in the U.S. whereas his physicians opposed further intervention. This case brought to light many issues and questions, including the best interest of the patient, financial considerations and scientific validity of a treatment. It has also highlighted the consequences from the breakdown of patient-family-physician communication. This is not a new situation but one that deserves to be revisited with attempts to understand how to make it better.

Many diagnoses such as a throat infection or pneumonia can have relatively simple treatment and follow-up care accompanied by a predictable pathway of medical management and prognosis. When the diagnosis is complex and associated with other comorbidities, however, as often is the case for children with congenital heart disease and developmental differences, uncertainty can become the focus of the conversation. The future may involve multiple surgeries, therapies, educational supports, developmental delays, genetic disorders and the potential for long-term care, and the conversation cannot occur in convenient time allotments. It has to allow for families to process information and revisit the questions over and over again. Most importantly, patients and families need to understand that although circumstances are difficult, there is also room for hope.

Patients, families, and physicians come to these encounters with their own expectations and lenses through which they understand communication. Culture influences these encounters, and it can quickly lead to misunderstandings and consequences, such as those manifested in Anne Fadiman's 1997 book, The Spirit Catches You and You Fall Down. In addition, physicians' own emotions shape these encounters, as described in Danielle Ofri's 2017 book, What Doctors Feel.

Fadiman's book marks a more idealistic time in our own development as physicians, when we could not imagine we would ever make those mistakes and were sure we would spend enough time with patients and families that it could not happen. Ofri's most recent book resonates as we reflect on how our own resolved and unresolved emotions shape our interactions with patients and families. And this can result in the breakdown of communication.

When asked about the most trying part of being a physician, our colleagues' responses, and our own, may include the following: to cure, to heal, to fix, while not making mistakes. This may be what is expected of us, yet the most difficult part may lie not in the technical aspects but in the art of doctor-patient communication, the act of delivering difficult news, especially if the results cannot be fixed or healed. And if this is the case, then time is one aspect that allows patients and families to be at the center of the healing relationship.

A diagnosis has meaning. It gives a name to the struggles and pain that individuals and families experience. It matters how it is delivered and who delivers it, especially when there is uncertainty and not a clear path. These conversations should provide a pathway to relieve struggles, provide support and alleviate suffering.

See the original post:

Meaningful Conversation Is a Crucial Part of Medicine - Scientific American (blog)

MU School of Medicine welcomed its most diverse class Friday … – Columbia Missourian

COLUMBIA The MU School of Medicine welcomed both a new building and new class of students Friday, marking the most diverse class in the school's history, Dean of Medicine Patrick Delafontaine said.

The school has been facing pressing questions about the lack of diversity in its student population. In 2015, only 5 percent of the medical school students were underrepresented minorities, according to previous Missourian reporting. This year, 9 percent of the class are underrepresented minorities, meaning black, Latino and Native American students, an MU Health Care spokesperson said.

Melanie Bryan, a fourth-year student at the school of medicine, said at the dedication of the new building that she was proud of not only the new structure but also the diversity of this year's class.

"(This is) a student population as diverse as our patient population," Bryan said.

In recent years, the school had to send the Liaison Committee on Medical Education, which accredits medical schools, a plan to improve its diversity issues by December 2016. If the committee had decided that there hadn't been enough improvement, the school could have been put on probation, according to previous Missourian reporting.

The school of medicine's class of 2021, welcomed Friday morning at the annual white coat ceremony, consists of 128 students. This is an increase of 32 students from last year, according to a news release.

Also, 32 percent of the incoming class are ethnic minorities, including black, Asian, Latino, Native American and Pacific Islander students. That is in addition to the 9 percent of the class who are underrepresented minorities.

This is an increase from last year, where 27 percent of the class were minorities and 8 percent were underrepresented minorities, Diamond Dixon, an MU Health Care spokesperson, said.

The Patient-Centered Care Learning Center, a new, $42.5 million medical education building, was dedicated Friday afternoon after ten years of brainstorming and construction. It was the result of a partnership between the school of medicine and two hospitals in Springfield, CoxHealth and Mercy. The Chambers of Commerce in Columbia and Springfield, as well as the state legislature, also supported the project, said Weldon Webb, the MU associate dean for Springfield Clinical Campus Implementation.

"The return on investment of this expansion is tenfold," Barbe said at the dedication ceremony. "This activity that looks like a big investment will reach broadly for many years to come."

The expansion is expected to generate jobs in the medical field that will help alleviate the physician shortage, he said.

"By giving students more options for clinical training in other hospitals and physician practices, we are educating them on the diverse health needs of our state and increasing the odds of putting more physicians in Springfield and southwest Missouri," Steve Edwards, president and CEO of CoxHealth, said in a statement.

Through the partnership with the two hospitals, MU's school of medicine was able to create an additional medical campus in Springfield in 2016, according to a news release. In February, nine third-year students were in Springfield, and 32 additional medical students were expected to be admitted each year as a result of the expansion.

Supervising editor is Sky Chadde.

See the article here:

MU School of Medicine welcomed its most diverse class Friday ... - Columbia Missourian