Ambarella Inc (AMBA) Stock Is a Lost Cause in Moore’s World – Investorplace.com


Ambarella Inc (NASDAQ:AMBA) shares are plunging in the wake of a disappointing first-quarter earnings report. In fact, AMBA stock is off by more than 10% so far in Wednesday's trade.

Our Luke Lango, in his preview of the quarter, acknowledged the risk but called the stock a good long-term play because of the promise that its chips, which first powered the GoPro Inc (NASDAQ:GPRO) camera, could be a key ingredient in the next decade's autonomous transportation revolution.

They think the chips will go into self-driving cars.

Everyone in tech is betting on autonomy. Apple Inc. (NASDAQ:AAPL) is said to be doing it. Alphabet Inc (NASDAQ:GOOGL) already has. Tesla Inc (NASDAQ:TSLA) has gotten rich on it. Uber stands accused of stealing it.

Ambarella could be the next boom's arms merchant. The argument is that you buy the dip on the hype, wait until the field takes off, and profit!

This veteran tech reporter has a different story to tell about AMBA stock. Come with me, then, into the cave of riches known as Moore's Law.

Moore's Law, the idea that chip density can double (and prices can halve) every year or two, has been the driving force of technology for 50 years.

It may be the only thing the 2017 stock market believes in, given that the five most-valuable companies in the world, Apple, Google, Amazon.com Inc. (NASDAQ:AMZN), Facebook Inc. (NASDAQ:FB) and Microsoft Corporation (NASDAQ:MSFT), all use it to power their clouds.

But Moore's Law isn't just a story of wealth. It's also a story of wealth's destruction.

Because chips get better every year, PCs rot like fruit on a warehouse floor. Their value declines rapidly with time. Companies that make chips, even mighty Intel Corporation (NASDAQ:INTC), have to race ever faster just to keep up. Then there is Moore's Second Law: as chips become more complex, their production becomes more capital-intensive. Only three of the hundreds of companies that emerged from the PC revolution are still with us; Intel is one, and Apple and Microsoft are the other two.

Most of the diamonds in Moore's cave are glass, interesting only in that they're made of the same thing: silicon.

Ambarella's chips see. They process incoming light or other waves into digital information, then translate that information into inputs a drone can use to avoid obstacles, or a car can use to drive itself.

They are a vital ingredient in self-driving cars, which must process all kinds of incoming data, on all kinds of frequencies, to replicate what you do every day when you decide to make that left turn into oncoming traffic.


Read the original here:

Ambarella Inc (AMBA) Stock Is a Lost Cause in Moore's World - Investorplace.com

IBM Says This Breakthrough Will Breathe New Life Into Moore’s Law – Fortune

IBM, GlobalFoundries, and Samsung said Monday that they have found a way to make thinner transistors, which should enable them to pack 30 billion switches onto a microprocessor chip the size of a fingernail.

The tech industry has been fueled for decades by the ability of chipmakers to shoehorn ever smaller, faster transistors into the chips that power laptops, servers, and mobile devices. But industry watchers have worried lately that technology was pushing the limits of Moore's Law, a prediction made by Intel (INTC) co-founder Gordon Moore in 1965 that chips could double in power every two years or less.


Monday's news means the new transistors could enable the production of smaller, 5-nanometer processors within a few years. Most of today's high-end chips sport 10-nanometer transistors. IBM said chips using this transistor technology will be 40% faster and use 75% less power than current 10-nanometer chips. Smaller, faster chips would suit artificial intelligence, virtual reality, and other compute-intensive uses, IBM said.
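For a rough sense of what those two figures imply together, here is a back-of-envelope sketch. The arithmetic is my own, not IBM's, and it assumes the speed and power numbers refer to the same workload and operating point, which the announcement does not spell out:

```python
# Back-of-envelope: combine IBM's quoted "40% faster" and "75% less power" figures.
speedup = 1.40          # 5 nm chip performance relative to a 10 nm chip
relative_power = 0.25   # 75% less power means 25% of the 10 nm chip's power

perf_per_watt_gain = speedup / relative_power
print(f"Implied performance-per-watt gain: {perf_per_watt_gain:.1f}x")  # roughly 5.6x
```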

"This is a big change in transistor design," Jim McGregor, founder and principal analyst of Tirias Research, tells Fortune. The goal there is to keep shrinking the transistors while also improving performance, he adds.

Related: IBM Still Awaits Payoffs on Big Bets

This work, which the three companies will discuss at a tech conference in Japan this week, should show up in working chips around 2020 or beyond. "Right now we're moving from 10 nanometer to 7 nanometer, with 5-nanometer to follow," McGregor says.

Related: IBM Dumps Chip Division, Books Steep Charge

IBM sold its chip-making capabilities to GlobalFoundries three years ago, but continues to work on server chip design. So while IBM (IBM) is no longer a chipmaker, it works closely with chip fabrication partners and still has key talent doing transistor and processor technology research, McGregor notes.

Link:

IBM Says This Breakthrough Will Breathe New Life Into Moore's Law - Fortune

New Discovery Delays Moore's Law Catastrophe, Suitable Spintronic Materials Found – EconoTimes


Transistors. (Image: Recklessstudios/Pixabay)

For the most part, the transistors found in modern processing chips are made of silicon. Unfortunately, the tech industry is fast approaching the wall that Moore's Law describes. This has prompted researchers to find a better way to create computer chips, which happens to involve finding better materials. This is exactly what a team from the University of Utah has done.

The material in question is actually a group in the vein of perovskites, but with an organic-inorganic hybrid setup, Futurism reports. Using this resource, the researchers, led by assistant professor Sarah Li, were able to bring spintronics into the realm of reality. This is notable because, up until this point, the concept was largely a theory.

Just to give some background on what this discovery means for the tech industry, spintronics is basically an approach in which the flow of information is conducted vertically instead of horizontally. This removes much of the barrier that Moore's Law presents, which states that transistors become smaller and smaller until there is no longer any space for improvement.

Before this development, many had successfully tested spintronics in a limited sense. Unfortunately, there was the small matter of actually manipulating the effect in order to make it usable, which prevented them from actually making it happen. The perovskites in this scenario can be manipulated and are stable enough to actually facilitate the movement and storage of information.

Speaking to Newswise, Li explained as much about the properties of spintronics and why it was so hard to achieve until now. The publication then goes on to cite certain experts who say that the discovery practically amounts to a miracle.

"Most people in the field would not think that this material has a long spin lifetime. It's surprising to us, too," says Li. "We haven't found out the exact reason yet. But it's likely some intrinsic, magical property of the material itself."


Original post:

New Discovery Delays Moore's Law Catastrophe, Suitable Spintronic Materials Found - EconoTimes

An IBM Breakthrough Ensures Silicon Will Keep Shrinking – WIRED


View post:

An IBM Breakthrough Ensures Silicon Will Keep Shrinking - WIRED

GPUs to Run 1000 Times Faster by 2025 – Huang – Wall Street Pit

The CEO being referred to is NVIDIA (NASDAQ:NVDA) co-founder Jensen Huang, and he shared his insights on Moore's Law (and life after it) during the recently concluded Computex 2017 event held in Taipei.

Moore's Law, named after Intel co-founder Gordon Moore, is based on his observation that, because the size of transistors was shrinking so rapidly, the number of transistors that could fit per square inch on integrated circuits seemed to double every year since they were invented.

Moore's prediction was that this trend would continue into the future. And although the pace may have slowed, the number of transistors that could fit per square inch did continue to increase, doubling not every year but every 18 months instead. With this exponential growth, computers became twice as powerful, benefiting not just consumers but device manufacturers as well.

This went on for a while (as predicted). But logic also dictated that sooner or later, physical limitations were bound to enter the picture, and growth would not just slacken but could possibly stop altogether. Right now, it seems we're already in that state.

As Huang told analysts and reporters at the Computex event: "Microprocessors no longer scale at the level of performance they used to; that is the end of what you would call Moore's Law. Semiconductor physics prevents us from taking Dennard scaling any further."

Dennard scaling, named after Robert H. Dennard, who co-authored the concept, states that even as transistors become smaller, power density remains constant, so a chip's power consumption remains proportional to its area.
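To make that statement concrete, here is a minimal sketch of idealized (classical) Dennard scaling, assuming capacitance and voltage shrink by the same factor k while frequency rises by k and area shrinks by k squared. Under those textbook assumptions, power per transistor falls as 1/k^2 and power per unit area stays constant, which is the constancy the paragraph above describes. The numbers are illustrative, not real device data:

```python
# Idealized Dennard scaling: shrink a device by factor k and check that
# power density (dynamic power per unit area) stays constant.
def dennard_scale(C, V, f, area, k):
    """Classical scaling rules: C and V shrink by k, f rises by k, area shrinks by k^2."""
    C2, V2, f2, area2 = C / k, V / k, f * k, area / (k * k)
    power2 = C2 * V2 ** 2 * f2   # dynamic power ~ C * V^2 * f
    return power2 / area2        # power density after scaling

C, V, f, area = 1e-15, 1.0, 3e9, 1e-12   # illustrative starting values
baseline_density = (C * V ** 2 * f) / area
print(baseline_density, dennard_scale(C, V, f, area, k=1.4))  # the two values match
```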

The combined effects of Moore's Law and Dennard scaling have affected the semiconductor industry in such a way that only the few who can afford multibillion-dollar financing can continue to push the technology further. And because there aren't many who fit into this category, mergers and acquisitions will become a necessary solution to keep technology advancement from stagnating.

On NVIDIA's end, Huang assures that the company's venture into artificial intelligence and deep learning will keep it ahead even with the death of Moore's Law. And it's not by making more powerful machines, but by developing smarter machines.

That's not to say, though, that NVIDIA will stop making its GPUs more powerful. On the contrary, as GPUs can now be considered the core of the AI universe, NVIDIA has recently unveiled a GPU-accelerated cloud platform that's built specifically to develop deep learning models and algorithms on GPUs.

Moore's Law may be dead. But according to Huang, the performance of GPUs will continue to improve, not through the increased power of transistors, but through new GPU architectures. He also says that by 2025, GPUs will perform 1,000 times better. And that is definitely something to look forward to, even with the death of Moore's Law.
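As a rough check on what that claim would require, assume the 1,000x gain is measured from 2017, the year of the remark; the span is my assumption, since Huang did not specify a baseline year. The compound annual improvement needed comfortably exceeds the classic Moore's Law pace:

```python
years = 2025 - 2017                    # assumed measurement window for the 1,000x claim
required_annual = 1000 ** (1 / years)  # compound annual gain needed
moores_law_annual = 2 ** (1 / 2)       # doubling every two years

print(f"Needed per year for 1,000x over {years} years: {required_annual:.2f}x")  # ~2.37x
print(f"Classic Moore's Law pace per year:             {moores_law_annual:.2f}x")  # ~1.41x
```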

View post:

GPUs to Run 1000 Times Faster by 2025 - Huang - Wall Street Pit

Moore’s Law Is Ending… So, What’s Next? – Seeker

Scientists are engineering a new, more efficient generation of computer chips by modeling them after the human brain.

Remember when all you could do on your cellphone was call, text, maybe play snake? Since then, phones got faster and smaller and around every two years, you probably upgraded your phone from 8 gigs to 16 to 32 and so on and so forth. This incremental technological progress we've all been participating in for years hinges on one key trend, called Moore's Law.

Co-founder of Intel, Gordon Moore made a prediction in 1965 that integrated circuits, or chips, were the path to cheaper electronics. Moore's Law states that the number of transistors, the tiny switches that control the flow of an electrical current, that can fit in an integrated circuit will double every two years, while the cost will halve. Chip power goes up as cost goes down. That exponential growth has brought massive advances in computing power, hence the tiny computers in our pockets!
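Here is a minimal sketch of the trend as stated above, projecting a clean two-year doubling of transistor count and halving of cost per transistor. The starting count of roughly 2,300 matches an early-1970s chip like the Intel 4004, but the one-dollar-per-transistor starting cost is purely illustrative:

```python
# Illustrative Moore's Law projection: transistors double and cost per
# transistor halves every two years.
transistors, cost_each = 2_300, 1.0   # ~Intel 4004 transistor count; cost is made up
for year in range(1971, 1991, 2):
    print(f"{year}: ~{transistors:>9,} transistors at ~${cost_each:.4f} each")
    transistors *= 2
    cost_each /= 2
```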

Now, Moore's Law isn't a law of physics; it's just a good hunch that's driven companies to make better chips. But experts are claiming that this trend is slowing down. So, to power the next wave of electronics, there are a few promising options in the works. One idea currently in the lab stage is neuromorphic computing, which uses computer chips modeled after our own brains! They're basically capable of learning and remembering all at the same time, at an incredibly fast clip.

Read more:

Moore's Law Is Ending... So, What's Next? - Seeker

AI, machine learning will shatter Moore’s Law in rapid-fire pace of innovation – Healthcare IT News

Artificial intelligence: Savvy hospitals are deploying AI and its technological brethren, cognitive computing and machine learning, in specific use cases at this point, while industry luminaries are predicting that their advancement will soon start happening more quickly than previously anticipated.

"I've never in my career seen the acceleration of technology as fast as what we've witnessed in machine learning during the last two years," said Dale Sanders, executive vice president at Health Catalyst.

Sanders, it's worth noting, has a U.S. Air Force background working on stacked neural networks and fuzzy logic, what is now called deep learning, as well as serving as the CIO of both Northwestern University and the national health system of the Cayman Islands.

"The rate of improvement happening in machine learning," Sanders added, "is way beyond what Moore's Law is to chips."

Hospitals already deploying AI

As the next generation of both patients and caregivers, including clinicians, doctors, nurses, specialists, even executives and administrators, starts taking a foothold in the healthcare workforce, hospitals looking for a first-mover advantage already know that AI is on the verge of becoming a critical component across the entire organization, and not just IT.

"AI and machine learning are exciting opportunities for us to accelerate," Carolinas HealthCare Chief Information and Analytics Officer Craig Richardville said. "To be successful you have to understand how that will fit within your market and your patient population, and you have to be knowledgeable about how to use it."

[Also: Hospital datacenters: Extinct in 5 years?]

Today, that means picking opportunities akin to low-hanging fruit for modern AI capabilities. Carolinas, for its part, is working to develop self-service applications that provide tools to patients for self-diagnosis and self-treatment in very targeted scenarios where the science enables clinicians to understand what the right methods are, Richardville said.

The hospital is also eyeing AI to capture patient information and bring it into data lakes or warehouses, which is paramount because Richardville said that only 20 percent of relevant patient information for Carolinas clinicians resides in its EHRs.

"Applying more intelligence to that data continues our transition from the art of medicine to the science of medicine," Richardville added.

The revenue cycle is another area ripe for machine learning, according to Stuart Hanson, senior vice president of Change Healthcare.

"Healthcare organizations have started to become more information-centric, and the next level of that is taking a personalized view," Hanson said.

Hanson cited two examples: the ability to predict what is relevant for a particular patient and deliver smart messaging, such as wellness and prevention tips and price transparency, as well as the opportunity to drive down costs associated with useless billing by better understanding how patients interact with various types of payment statements.

"There's clear ROI in the revenue cycle for physicians and hospitals," Hanson said.

While such work among payers and at Carolinas and other leading hospitals is admittedly cutting-edge, Health Catalyst's Sanders is hardly alone in believing AI, cognitive computing and machine learning will outpace the processing power advances that Moore's Law illuminated.


Beyond the futuristic hypothetical

Intel co-founder Gordon Moore predicted in 1965 that the then-current pace of computer chips doubling in power every year would continue into the future; Moore's Law was amended a decade later as processing power was doubling every two years. And depending on whom you ask, that rate pretty much held steady for the better part of 50 years.

Indeed, a lot has happened during the last five decades, genomics advances not least of all. When a company named 454 sequenced DNA co-discoverer James Watson's genome in 2007, it cost $2 million, which was down just a dram from the $3 billion the Human Genome Project spent on its first sequencing in 2003.

"Now we're down to $1,000, and we'll get to an era of $100 per genome," said Bryce Olson, global marketing director of health and life sciences at Intel. "It's going to become the next big thing."

AI is exploding quickly. Healthcare providers will be able to diagnose disease by DNA in the near future, Olson said, and the industry is on the verge of making the technologies faster, better and cheaper.

"We're seeing it right now in the genomic space and with machine learning algorithms," Olson added. "It's a lot faster than Moore's Law."



Read more here:

AI, machine learning will shatter Moore's Law in rapid-fire pace of innovation - Healthcare IT News

Moore’s New Law: Put your Chips on What’s Possible. – Huffington Post

When I first discovered Moore's Law in 1983, I realized I could use it as one of my tools to accurately predict the future of technological change. At the time, few were paying much attention to Moore's Law. Over the decades, the press has declared the death of Moore's Law, usually stating that it is impossible for scientists to keep making processors smaller and more powerful at the same exponential rate. This news usually comes from a tech conference where industry executives share their frustration in getting to the next level. We have recently seen major news reports of this kind. I am always reminded of a great quote: "The reports of my death have been greatly exaggerated."

Although that iconic remark attributed to Mark Twain is, in reality, a misquotation, it does aptly summarize the recent rebirth of Moore's Law.

But to my mind, the purported phoenix-like rise from the ashes of one of technology's best-known principles really misses the mark so far as anticipatory thinking is concerned. We need to be asking more pertinent questions and looking at bigger issues that command greater attention.

At the risk of explaining a concept that's already widely understood, Moore's Law, named after Gordon E. Moore, co-founder of Intel and Fairchild Semiconductor, deals with processing power, the speed at which a machine can perform a particular task. In 1965, Moore published a paper in which he observed that, between 1958 and 1965, the number of transistors on an integrated circuit had doubled every 18 to 24 months. At the same time, Moore noted, the price of those integrated circuits dropped by half.

Although the formula held true for some 50 years, critics have been quick to point to possible death knells over the past several years. In effect, they argue that a transistor can only be made so much smaller and more powerful, even as chips inevitably keep getting more expensive to produce.

Whenever I hear this type of prediction, I write an article reminding us all that using the word impossible is a bet against human creativity and ingenuity, and that such bets will be wrong. Case in point: last year IBM proved the naysayers wrong by doing the impossible and introducing a new chipset, keeping Moore's Law going, and Intel just did it again by recently unveiling its long-anticipated Cannonlake chipset. The Intel chips are a mere 10 nanometers, down from the 14 nanometers used in currently available chips.

The product debut, announced by Intel CEO Brian Krzanich, underscored the reality that Moore's Law was still, in fact, "alive, well and flourishing," as Krzanich put it.

The fact that Intel's announcement, at the very least, waters down the obituaries for Moore's Law is certainly good news. The fact that processing chips can continue to be manufactured to increasingly stringent specifications bodes well for anyone who uses technology in some capacity (meaning all of us).

But I also feel very strongly that it keeps us from seeing the bigger picture. Instead, we need to be asking better questions, because the factors encompassed by Moore's Law simply no longer matter as much as they once did. We depend less on advances in chip technology because of the exponential growth of the capabilities of the overall ecosystem, of which chips, bandwidth and digital storage are merely one part.

Here's one way to look at it. Not very long ago, a laptop was largely a stand-alone device, as its storage and processing power derived solely from its chips.

Not anymore. For one thing, we now use a smartphone or tablet to access supercomputers in the cloud, allowing us to go far beyond the processing power of the individual chips in our devices. That's how we can use powerful tools such as Apple's Siri, the Amazon Echo and Google Home to tap into the capabilities of the world's supercomputers with just a few spoken words.

Looked at another way, in the recent past we all relied on the power of the chips in our devices, but today we have the computing power of the world in our pockets or on top of a table, and it isn't limited as it once was to the chips inside a device.

All this boils down to the fact that, despite Moore's Law's focus on the processing speed of the chip, computing power is no longer limited to the computational brute strength of the individual device. It's more specialized, meaning that overall computing power will continue to improve as functions such as distributed computing, digital storage, advanced bandwidth (wired and wireless), and network processing are more equitably spread out over an ecosystem of computing power.

It also comes down to looking past the surface when seemingly central issues are raised. In this case, whether Moore's Law is dead and buried or alive and kicking is, in many ways, less relevant when compared with other advances in technology and structure. And it begs the question: What issues and developments are you and your organization examining at a deeper level to identify game-changing insights and opportunities? Are we all paying sufficient attention to the transformational advances in the whole technology ecosystem, or needlessly focusing on just one or two elements?

Link:

Moore's New Law: Put your Chips on What's Possible. - Huffington Post

Xeon E3: A Lesson In Moore’s Law And Dennard Scaling – The Next Platform

April 6, 2017 Timothy Prickett Morgan

If you want an object lesson in the interplay between Moore's Law, Dennard scaling, and the desire to make money from selling chips, you need look no further than the past several years of Intel's Xeon E3 server chip product lines.

The Xeon E3 chips are illustrative particularly because Intel has kept the core count constant for these processors, which are used in a variety of gear, from workstations (remote and local) and entry servers to storage controllers to microservers employed at hyperscalers, and even for certain HPC workloads (like Intel's own massive EDA chip design and validation farms). In a sense, it is now the Xeon E3, not the workhorse Xeon E5, that is literally driving Moore's Law in terms of chip design. (Ironic, isn't it?)

In the wake of the recent Kaby Lake Xeon E3 v6 server chip announcement, which we covered here in detail, we decided to take a look at how the Xeon E3 has evolved over time, complete with detailed tables and charts comparing the performance and price/performance of the family of single-socket server chips over their lifetime, and specifically compared to the Nehalem Xeon 5500 processors from March 2009 that represent the resurgence, both economically and technically, of the Xeon platform in the datacenter after a few years of AMD's Opterons gaining considerable share.

To get started, let's just line up the feeds and speeds of the various generations of chips, ranging from the Sandy Bridge chips from 2012 through the Kaby Lake chips this year.

As we have done for past Xeon family comparisons, we have calculated the aggregate and relative oomph of each processor by multiplying the clock speeds by the core counts to give a kind of aggregate peak clocks for each chip. This is called Raw Clocks in our tables, and you can reckon a cost per gigahertz of clock speed to get a very rough relative performance metric. We have also ginned up a more precise relative performance metric, called Rel Perf, that takes into account the instructions per clock (IPC) enhancements from each Xeon core generation, and then scaled this with the clock speed enhancements and core expansion in the Xeon lines. We created this Rel Perf metric for the first time when comparing the Xeon E5 processors from the Nehalem Xeon 5500 processors in 2009 through the Broadwell Xeon E5 v4 processors that came out this time last year. We reckoned the relative performance of each processor SKU across all of the families against the performance of the Nehalem E5540, which was a four-core processor with eight threads that had a 2.53 GHz clock speed. The top-bin Broadwell Xeon E5-2699 v4 processor, which has 22 cores running at 2.2 GHz, for example, has 6.34X the performance of this baseline Nehalem E5540 processor. (Intel did not have the distinction between the E3 for uniprocessor and E5 for dual-socket machines back then.)

The relative performance metric presumes that the workload is not memory capacity or memory bandwidth constrained, of course. Meaning, it fits in a relatively small memory footprint and is not bandwidth sensitive. A lot of workloads are like this, particularly for hyperscalers and HPC shops.

Here is the full lineup of the Kaby Lake Xeons just unveiled last week:

As you can see, the top-bin Kaby Lake chip, which has four cores running at 3.9 GHz, has only 2.18X the performance of that baseline Nehalem E5540 processor. About 54 percentage points of that 118 percent performance increase come from clock speeds alone, which have been enabled by the shrink from 45 nanometer to 14 nanometer processes. The rest of that performance bump (and this is really a gauge of integer performance) is due to improvements in IPC in the cores and tweaks in the cache hierarchy. Floating point performance has increased by leaps and bounds over these years, of course, and so has the integrated GPU performance, which can be used to do calculations with OpenCL if you are adventurous.
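The split described above follows directly from the clock speeds: the frequency jump from 2.53 GHz to 3.9 GHz is a factor of about 1.54, and dividing the quoted 2.18X total by that leaves roughly 1.41X attributable to IPC and cache improvements. A quick sketch of that arithmetic:

```python
baseline_clock, kaby_clock = 2.53, 3.9   # GHz: Nehalem E5540 vs top-bin Kaby Lake E3
total_gain = 2.18                        # Rel Perf of the Kaby Lake part vs the baseline

clock_gain = kaby_clock / baseline_clock   # ~1.54x, i.e. ~54 points from frequency alone
ipc_cache_gain = total_gain / clock_gain   # ~1.41x from IPC and cache tweaks
print(f"clock: {clock_gain:.2f}x, IPC/cache: {ipc_cache_gain:.2f}x")
```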

You will note that the L3 cache sizes on the Xeon E3s do not change that much; it is usually 8 MB, sometimes 4 MB or 6 MB in special cases.

With the core count and L3 cache constant, and only IPC and process changing, there is just not the same room to expand performance (as measured by throughput) as there is when you let Moore's Law push the core counts up and the clock speeds down. What is interesting is that the successive process shrinks have allowed Intel to boost the bang for the buck on E3-class chips considerably over the past eight years. The Nehalem E5540, which definitely could be deployed in a single-socket machine, cost $744, or $744 per unit of relative performance as we reckon it, since it is the touchstone in our comparisons. As you can see, the top bin, and therefore most expensive in terms of performance, Kaby Lake E3-1280 v6 part costs $612, and that yields a $280 per unit of relative performance rating. That is a factor of 2.65X better bang for the buck. And for the mainstream Kaby Lake Xeon E3 chips (those that have HyperThreading activated on their cores), the price/performance is averaging around $142 per unit of relative performance, which is a factor of 5.4X better price/performance compared to that baseline Nehalem Xeon E5540. The Broadwell Xeon E5s have a bang for the buck that ranges from a low of $159 to a high of $649 per unit of relative performance. In other words, those top-bin parts in the Xeon E5 have lots more throughput, but they have lower clocks and they have not shown the same kind of price/performance improvements.
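Here is the bang-for-the-buck arithmetic from the paragraph above in sketch form, using the article's list prices and Rel Perf figures:

```python
chips = {
    "Nehalem E5540 (baseline)": {"price": 744, "rel_perf": 1.00},
    "Kaby Lake E3-1280 v6":     {"price": 612, "rel_perf": 2.18},
}
for name, c in chips.items():
    c["cost_per_perf"] = c["price"] / c["rel_perf"]
    print(f"{name}: ${c['cost_per_perf']:.0f} per unit of relative performance")

gain = (chips["Nehalem E5540 (baseline)"]["cost_per_perf"]
        / chips["Kaby Lake E3-1280 v6"]["cost_per_perf"])
print(f"Bang-for-the-buck improvement: {gain:.2f}x")   # ~2.65x
```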

This is how Intel has really benefitted from its manufacturing process prowess. Intel has, we think, been able to wring a lot more profit out of its Xeon E5 parts, and the middle line (operating profits) of the Data Center Group shows it. Our point is this: Intel has profits to burn when and if ARM server chip makers and AMD with its X86 alternatives get aggressive. How it will make up the profits it sacrifices to maintain market share remains to be seen. But it will take some cajoling to keep the server makers of the world in line, and this time around, the hyperscalers do exactly and precisely what the hell they want, unlike in 2003 when the Opteron plan was unveiled and 2005 when those chips were shipping in volume and kicking Intel's tail.

Here is how the Skylake Xeon E3 processors, including specialized ones that were announced last June for datacenter and media processing uses and implemented in 14 nanometer processes like the Kaby Lake Xeon E3s, line up:

The Skylake Xeon E3s were focused more on low power consumption than performance for these specialized parts, but the more standard Skylake Xeon E3s have higher wattages but not really much higher relative performance.

Things got interesting back in the Broadwell generation, shown above and also using 14 nanometer wafer etching, when Intel did regular single-socket Xeon E3 processors and also kicked out specialized Xeon D system-on-chip designs for Facebook and, we presume, others. The Xeon D chips were single-socket processors, but had an integrated southbridge on the package and a lot more cores. They also had much higher price tags, and Intel charged a hefty premium for low voltage versions that had higher performance. These are not Xeon E3 processors, per se, but they are closer to a Xeon E3 than they are to a Xeon E5.

Here is how the Haswell and Ivy Bridge Xeon E3s, implemented in 22 nanometer processes, stack up:

And finishing up, here are the 32 nanometer Sandy Bridge Xeon E3 parts and the 45 nanometer Nehalem 5500 parts:

That is a lot of tabular data to chew on, so we made some arts and charts to get some general trends. The first is a trend line showing the performance and price/performance of the top bin Xeon E3 parts over time compared to the top bin Nehalem X5570, which had four cores running at 2.93 GHz, and the top bin Xeon D-1581, which had sixteen cores running at 1.9 GHz. As with other Xeon processors, the price/performance curves have flattened out.

And that curve, dear friends, represents Intel's profits. We think Intel has been able to make chips a lot cheaper over this same period of time, and the company spent the better part of a whole day last week bragging about this very fact. In great and glorious detail. We think Intel always suspected it would eventually get competition again in datacenter compute, and it has been making hay, tons and tons of it, while the sun was shining on its fields alone.

This is really smart, even right up to the moment that it encourages intense competition that causes a compute price war, which we think is coming this year. It is better to make the money between 2010 and 2017 than not make it, that is for sure.

As you can see from the top-bin chart, the Xeon D really sticks out, and it offers about the same bang for the buck as the real Haswell and Broadwell Xeon E3 parts. Over the past eight years, performance has gradually trended up, but again, it has only slightly more than doubled for the real four-core Xeon E3 variants. (FYI: We have picked the Xeon E3 chips without a graphics processor in the package or on the die wherever possible to get the rawest X86 compute comparison.)

Now, let's step back from the top bin and see how it looks:

As you can see, the bang for the buck for these chips has fallen lower, but the performance and price/performance curves are not that different. And the Xeon D does not stick out so much like a sore thumb, either. (It is also interesting that there is not a Skylake or Kaby Lake Xeon D. Hmmmm...)

And at the low end of the Xeon E3 lineup, the performance gains are more choppy and so is the price/performance:

In fact, after the Nehalems, Intel kept the price/performance dead steady except for the Broadwell E3-1265L and the Skylake E3-1265L, both of which are specialized low voltage parts that fit the performance profile we were looking at. You could draw any number of charts from this data, and you have our full permission to do so. Have fun.

Categories: Compute, Enterprise, HPC, Hyperscale

Tags: Intel, Kaby Lake, Skylake, Xeon, Xeon D, Xeon E3


More:

Xeon E3: A Lesson In Moore's Law And Dennard Scaling - The Next Platform

There’s more to Moore’s Law than transistor counts – PC World

The implied increase in power is meant to transform into more-meaningful computing

Picture: Krbo (Flickr)

The PC industry has faithfully followed Moore's Law since Gordon Moore first announced in 1965 that the density of transistors on a chip would double every year. What many people don't know is that Moore's Law was actually revised in 1975 to state that the density doubles every two years instead. Things have been a bit shaky of late and the law is stagnating. The announcement of Intel's 8th Gen CPU, which is still built on a 14nm process, effectively means that we've had the same chip density on the market for some five years. So is Moore's Law dead? Many say it is. I disagree.

Moore's Law, from a purist's point of view, has always been about computing power. But it isn't just about cramming more transistors into a space. Instead, it's about making computing power affordable for the masses. Take a step back and think about the first man on the moon and colour TVs, then the progression to the personal computer: what is the true meaning of Moore's Law?

I interpret these critical points as cost reduction, practical usage, and sub-components working as part of an overall system that is affordable, accessible, usable and purposeful.

Depending on who you ask within the industry, the interpretation of Moore's Law differs. In Moore's reasoning, it is a log-linear relationship between device complexity (higher circuit density at reduced cost) and time. Simply put, it is more-meaningful computing power at affordable prices. This triggers a secondary offshoot (or complementary law) of Moore's Law, which is Rock's Law, but we can save that one for another time.

So does that mean that Moore's Law is akin to a moving goal post? It isn't about more transistors crammed into a set area; it's a changing set of guidelines for people to make more meaningful systems. After all, what good is a processor on its own anyway?

To better understand this point, let's look quickly to PC history for some clues.

A quick historical recap (Source: Professor Wouter Den Haan, Chair, LSE):

1970: mechanical calculators, repetitive retyping, file cards, filing cabinets

1970s: memory typewriters, electronic calculators

1980s: PCs with word processing and spreadsheets

Late 1980s: e-mail, electronic catalogues, T-1 lines, proprietary software

Late 1990s: the web, search engines, e-commerce

2000-05: flat screens, airport check-in kiosks

By 2005, the revolution in business practices was almost over

From 2005 until now, offices have used proprietary information, desktop computers and laptops in pretty much the same way they did post-1994. The major tech companies and trends that we consider recent champions of tech growth are Amazon (1994), Google (1998), Wikipedia (2001), iTunes (2001), BlackBerry (2003), Facebook (2004), the iPhone (2007) and the iPad (2010). The effect of the smartphone boom of 2007 and the tablet boom of 2010 needs, I think, to be discussed separately, and I will leave that for another date. It is important to note, however, that neither invention has transformed the way business is done at its core, unlike, say, email, the Internet and the PC, which have.

The smartphone and tablet are improvements on a category that has already done most of its disrupting in the office. The advent of smart watches and Fitbit-like devices can now be seen mostly as a fad that failed to go mainstream, and it is something that I personally was challenged with launching into a few years ago. I would say our team did a great job in bypassing this category at the time (IoT and smart devices still have a place, just not for us right now). We also argued that tablets would go the way of netbooks (remember them?), and it looks like they are not a category that will be able to stand by themselves. We have an onslaught of 2-in-1 devices coming, and they seem to be a rather logical evolution of the notebook PC, taking the good elements from tablets and becoming a meaningful device for some users. But for now, let's go back to Moore's Law.

To put it really simply, Moore's Law means something different to everyone. For me as a computer designer, it means making powerful computers that are affordable and useful. Hence, from that perspective, I think Moore's Law is far from dead. Thus, as a team, we are going to carry on making more powerful computers that can do more, not just because they have double the density of transistors, but because this implied increase in power is meant to transform into more-meaningful computing. At the end of the day, wouldn't better battery life, a faster hard drive, a better screen and Wi-Fi be meaningless unless they amounted to better performance and a more positive value-added computing experience? If we can also keep it affordable, then Moore's Law is well and truly alive, and it continues to benefit us all.

A sustainability angle should probably also be added, i.e. we need to think of the full product life-cycle of a device and the flow-on effects of its own ecosystem (cables and so on). After all, Moore's Law has changed before and it can be adjusted again.


Tags: notebooks, Intel, Venom Computers

See the article here:

There's more to Moore's Law than transistor counts - PC World

We Need A Moore’s Law For Government – Investor’s Business Daily

You would need almost three of these 1985 Cray 2 supercomputers to equal the processing power of an iPhone. So if technology can keep getting smaller and more efficient, why can't government?

Big Government: This week, IBM showed that it could cram data into a single atom, part of the private sector's never-ending quest to make things smaller and more efficient. If only government would follow this model.

IBM Research announced on Wednesday that it was able to put a holmium atom, a rare earth element, on top of a magnesium oxide surface and, with "a pulse of electric current from the magnetized tip of scanning tunneling microscope ... flip the orientation of the atom's field between a 0 or 1," according to the journal Nature, which published the findings.

Right now, the device can store just two bits of data, and it has to be kept at a temperature close to absolute zero. But when this technology is inevitably scaled up and made commercially viable, it will drastically shrink storage sizes, since today it takes 100,000 atoms to store a single bit of data on a hard drive.

IBM (IBM) figures that with this technology, a device the size of a credit card could hold all 32 million songs contained in the iTunes library.
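Two quick sanity checks on those figures: the jump from 100,000 atoms per bit to one atom per bit is a 100,000-fold density gain, and holding the whole iTunes catalog would be on the order of a hundred terabytes if a typical song weighs in at a few megabytes (the per-song size is my assumption, not IBM's):

```python
atoms_per_bit_today = 100_000                # hard-drive figure cited above
print(f"Density gain from one-atom bits: ~{atoms_per_bit_today:,}x")

songs, mb_per_song = 32_000_000, 4           # catalog size from the article; 4 MB/song assumed
total_tb = songs * mb_per_song / 1_000_000
print(f"Whole catalog: roughly {total_tb:.0f} TB on that credit-card-sized device")
```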

There's no telling when such devices would be commercialized, but what IBM's breakthrough tells us about the free market and about government is instructive.

Scientific breakthroughs like this occur because the private sector relentlessly pushes for greater efficiency. In the case of computing technology, it's resulted in "Moore's Law," named after Gordon Moore, who in 1965 noticed that the number of transistors that could be crammed onto an integrated circuit had been doubling every two years.

The result is nothing short of remarkable. The pocket-size iPhone, for example, has more than 2.7 times the processing power of the 1985 Cray Supercomputer, which took up, according to the brochure published at the time, "a mere 16 square feet of floor space."

This drive for efficiency occurs everywhere in a free-market economy from warehouses manned by robots, to the way McDonald's prepares its food, to the state-of-the-art navigation systems UPS trucks use to minimize delivery times.

But while the free market ceaselessly pushes things to get smaller and more efficient, the federal government continues to get bigger and less efficient.

Between 1985 and today, for example, the size of the federal government doubled, even after accounting for inflation, at a time when the U.S. population has increased by 34%.

That's just the spending side. Regulations have continued to pile up as well, without any concern about how they interact or overlap or reduce efficiency.

The result of this endless government growth has been a slower-growing private economy. From 1960 to 1988, real GDP increased at an average rate of 3.6%. In the years since, it has increased at an average rate of 2.5%. Since the last recession, the average real growth in GDP was less than 2.1%.

Can anyone honestly say that a bigger, more intrusive federal government has helped the economy, improved prosperity, or made things faster, better and more efficient?

President Trump came into office promising to "drain the swamp" of Washington, D.C. A better goal would be for him to follow IBM's lead and try to shrink the swamp until it becomes atomic sized.


More:

We Need A Moore's Law For Government - Investor's Business Daily

Intel Corporation Stops Following Moore’s Law – Motley Fool

No matter how microprocessor giant Intel (NASDAQ:INTC) tries to spin it, Moore's Law is dead.

For those unfamiliar with Moore's Law, it essentially says that every two years or so, the company would develop new chip manufacturing technologies that allowed its product development teams to cram far more transistors (and therefore features) into new products while keeping overall product costs flat compared to prior generations (implying a reduction in the cost per transistor).
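That parenthetical is worth spelling out: if the transistor count doubles each generation while the finished product's cost stays flat, the cost per transistor halves each generation. A tiny illustration with made-up numbers:

```python
# Flat product cost + doubling transistor count => halving cost per transistor.
transistors, product_cost = 1_000_000_000, 300.0   # illustrative starting point
for generation in range(4):
    print(f"gen {generation}: {product_cost / transistors:.2e} dollars per transistor")
    transistors *= 2   # the Moore's Law cadence described above
```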

Image source: Intel.

Based on this definition of Moore's Law, Intel is no longer following it, even as it tries to assure investors otherwise.

Indeed, not only have transitions to newer manufacturing technologies lengthened for Intel (already violating the "law"), but even when Intel does transition to those newer technologies, manufacturing yields are so poor that product costs go up generation over generation at the beginning of a new manufacturing technology ramp-up.

The "death" of Moore's Law doesn't spell gloom and doom for the personal computer industry, however. There are certainly ways to build more feature-rich and powerful computer chips without relying on transistors getting smaller and cheaper, but it requires a fundamentally different corporate mind-set than the one that Intel seems to have had in the past.

According to recent commentary from Intel executive Murthy Renduchintala, it looks like the company is finally learning to accept the death of Moore's Law.

"We're going to be focused more on the generation by the amount of performance increment it will give us," Renduchintala told PC World. "I don't think generations will be tagged to [manufacturing] node transitions."

Indeed, there are many ways for a chip company to deliver improved performance and features without needing to rely on transistors getting smaller.

For example, Intel's current seventh-generation Core processors are built on a performance-enhanced version of the company's original 14-nanometer technology, called 14-nanometer+. This technology doesn't provide an area reduction compared to the original 14-nanometer technology, but what that enhancement allowed Intel to do is deliver better performance and power efficiency than the sixth-generation Core processors built on the original 14-nanometer technology.

Going forward, it seems that Intel will continue to focus on trying to improve the underlying performance and power-efficiency characteristics of its pre-existing manufacturing technologies. Such improvements should allow the company to build increasingly better products without having to worry about the challenges associated with making transistors smaller (though it does need to worry about making those transistors better).

Furthermore, it's not all about manufacturing technology, either. Even without transitions to newer manufacturing technologies, Intel's chip design teams could make improvements to the underlying chip designs and architectures themselves.

Intel hasn't fully exploited that potential with its seventh-generation Core processors (the changes were mainly in the manufacturing technology and in reworking the circuit designs to use that technology), and it doesn't look like it will be doing much of that with its upcoming eighth-generation Core processors either (reliable leaks suggest that Intel will rely mainly on boosting processor core counts rather than making changes to the cores themselves).

However, in future generations -- now that the company is explicitly planning around needing to use the same basic chip manufacturing technology (though with performance enhancements) -- Intel might have enough time to plan for more substantive chip design and architectural changes each year.

If Intel can manage to improve both its architectures and the performance characteristics of its manufacturing technologies at an annual clip, then the company should be in a good position to deliver significantly better products to its customers at a regular pace -- always a good thing.

Ashraf Eassa owns shares of Intel. The Motley Fool recommends Intel. The Motley Fool has a disclosure policy.

View original post here:

Intel Corporation Stops Following Moore's Law - Motley Fool

Moore’s Law can’t last foreverbut two small changes might mean your phone battery will – Quartz


Moore's theory was that the power of computers would double every 12 months while the cost of that technology would fall by 50% over the same time. And so, for 40 years, what became known as Moore's Law remained pretty rock-solid. But these are hard ...

More:

Moore's Law can't last foreverbut two small changes might mean your phone battery will - Quartz

Preparing For The End Of Moore’s Law – AlleyWatch

Recently, in a packed room of over 100 professionals, Nuzha Yakoob shared with the hushed crowd how nature is the greatest innovator. Ms. Yakoob showcased Festo Robotics' bionic zoo, from elephant-nose-inspired end-effectors to grippers modeled after chameleon tongues. My personal favorite is the robotic ants that collaborate over the cloud to accomplish specific jobs that would be impossible to do alone. As I left #RobotLabNYC, I mused that biology holds the keys to unlocking the greatest challenges in computing and robotics.

Since 1965, Moore's Law has been the pillar of the modern computing age, which has also led to the greatest growth of robotics. However, many are predicting we are reaching the limit of the number of transistors that can be packed onto a single silicon chip. The law is named after Gordon Moore, co-founder and former CEO of Intel, who observed more than 50 years ago that transistors were shrinking so quickly that every year twice as many could fit onto a single chip, leading to exponential growth of processing power. Moore's Law was later adjusted to doubling every 18 months. (At RobotLabNYC, David Rose spoke about exponential growth as a driver and disrupter of everything in today's connected world.) However, many are observing a slowing of processing-power gains between chip generations, indicating that we could be just years away from the end of Moore's Law.

"It's time to start planning for the end of Moore's Law, and it's worth pondering how it will end, not just when," says Robert Colwell, former director of the Microsystems Technology Office at the Defense Advanced Research Projects Agency (DARPA).

Semiconductors are in everything today, but if artificial intelligence is the future, silicon may not be the most energy-efficient means of providing compute power. Many designers are taking Ms. Yakoob's approach by looking at biology, or the brain, as a model for future networks. The best known example is a DARPA-funded program called SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics), which is developing a neuromorphic machine technology that scales to biological levels. More simply stated, it is an attempt to build a new kind of computer with similar form and function to a mammal's brain. The ultimate aim is to build an electronic microprocessor system that matches a brain in function, size, and power consumption. It could recreate 10 billion neurons and 100 trillion synapses, consume one kilowatt (the same as a small electric heater), and occupy less than two liters of space.
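Taken at face value, those targets imply about 10,000 synapses per neuron and a power budget on the order of 10 picowatts per synapse; the following is just arithmetic on the figures quoted above:

```python
neurons, synapses, power_watts = 10e9, 100e12, 1_000   # SyNAPSE targets cited above
print(f"Synapses per neuron: ~{synapses / neurons:,.0f}")                    # ~10,000
print(f"Power budget per synapse: ~{power_watts / synapses * 1e12:.0f} pW")  # ~10 pW
```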

Neuromorphic computing was originally conceived by Caltech Professor Carver Mead. In his 1990 IEEE paper, Mead wrote that "large-scale adaptive analog systems are more robust to component degradation and failure than are more conventional systems, and they use far less power. For this reason, adaptive analog technology can be expected to utilize the full potential of wafer-scale silicon fabrication." In translation, the idea is to use analog circuits to mimic the neuro-biological architectures found in our nervous systems.

Researchers at Stanford University and Sandia National Laboratories announced last month a different approach to mimicking a mammal's brain: creating an artificial synapse. The artificial synapse, first reported in Nature Materials, mimics the way a brain's synapses learn through crossing signals. This means that the actual processing of information creates energy, rather than consuming energy to compute. Artificial synapses could provide huge energy savings over traditional computing, especially for deep learning applications.

According to Alberto Salleo, co-author of the paper, "It works like a real synapse, but it's an organic electronic device that can be engineered. It's an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that's been done before with inorganics."

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for voice-controlled interfaces like Alexa and for autonomous cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms, but these are still distant imitators of the brain that depend on energy-consuming computer hardware.

"Deep learning algorithms are very powerful, but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time. Instead of simulating a neural network, our work is trying to make a neural network," said Salleo's co-author, Yoeri van de Burgt.

The artificial synapse is built from inexpensive organic materials, composed of hydrogen and carbon, similar to a brain's chemistry. The voltages applied to train the artificial synapse are also the same as those that move through human neurons. According to the researchers, recognition accuracy has been in the upper ninetieth percentile.

The idea of using natural materials to run fuel cells is expanding into robotics with novel microbial components. Researchers at the University of Rochester have turned a century-old process on its head by using bacteria to generate an electrical current that powers robots via microbial fuel cells (MFCs). The researchers plan to use MFCs in wastewater, consuming the bacteria there as a way to power new types of devices.

"We've come up with an electrode that's simple, inexpensive, and more efficient. As a result, it will be easy to modify it for further study and applications in the future," says researcher Peter Lamberg.

The new MFC takes a novel approach, using carbon as the conductor of electricity. Until Lamberg's discovery, most MFCs relied on metal components or carbon felt, both of which corrode easily. His solution was to replace the metal parts with paper coated in carbon paste, a simple mixture of graphite and mineral oil. The carbon paste-paper electrode is not only cost-effective and easy to prepare; it also outperforms traditional materials.

Jonathan Rossiter, Professor of Robotics at the University of Bristol, has been using the University of Rochester's MFC in robots built to clean polluted waterways. Rossiter's Row-bot feeds on the bacteria found in dirty water and uses them for propulsion. Row-bot is still at the conceptual stage, but the University of Bristol plans to develop swarms of autonomous water robots that operate indefinitely in remote, unstructured locations by scavenging their energy from the environment.

"The work shows a crucial step in the development of autonomous robots capable of long-term self-power. Most robots require recharging or refueling, often requiring human involvement," exclaims Rossiter.

"We anticipate that the Row-bot will be used in environmental clean-up operations of contaminants, such as oil spills and harmful algal blooms, and in long-term autonomous environmental monitoring of hazardous environments, for example those hit by natural and man-made disasters," added co-researcher Hemma Philamore.

As we enter the new age of computing, the rules have yet to be written. The lines between organic and inorganic matter are blurring. At some point, it is entirely possible that biology-inspired machines will become a species unto themselves.

Read more:

Preparing For The End Of Moore's Law - AlleyWatch

IBM quantum computers fledge into a real business – CNET

IBM's quantum computer looks nothing like a classical machine.

In a few years, the same quantum computing concepts that gave Albert Einstein the heebie-jeebies could help Amazon deliver your toothpaste faster.

That's because IBM, the company that surprised the world in 1989 by arranging 35 xenon atoms into its own name, is launching its quantum computing business. Thirty-five years of research into the physics of the super-small is about to start paying its first dividends with actual customers.

"We will be providing access to quantum systems for selected industry partners starting this year," said Scott Crowder, who's leading the handoff of the quantum computing work from IBM Research to the IBM Systems product team.

A lot is riding on quantum computing. It offers fundamental breakthroughs that could help bring back the good old days of steadier computing progress. Moore's Law, the steady pace of chip improvements that's lasted for decades, has shrunk computing components so your smartwatch today is as powerful as a refrigerator-size mainframe last century. But some computing progress has stalled, which is why a 2017 laptop today doesn't get work done much faster than one from 2012.

"Moore's Law is struggling," Crowder said. But quantum computers will complement traditional computers, not replace them. "It'll do the pieces of the problem the classical computer can't."

Scott Crowder, chief technology officer of IBM's systems group, discusses the exponential advantages of quantum computing.

Quantum computers, which take advantage of the peculiar behavior and properties of atoms, are notoriously hard for even physicists to comprehend. But quantum computing is bubbling up at university and government labs, startups such as Rigetti and D-Wave, and the research arms of Microsoft, Intel and Google.

Quantum computing still is in its infancy, but even as it matures, you shouldn't expect a quantum-powered iPhone. IBM's quantum computer must be cooled to within a fraction of a degree of absolute zero, a temperature colder than outer space, so its innermost niobium and aluminum components aren't perturbed by outside influences. The cooling alone takes days. That's why IBM customers will tap into quantum computers over the internet, not tuck them under their desks or plug them into the company data center.

What kinds of work are quantum computers good for? Early work will figure out how to use quantum computers effectively and reliably -- kicking the tires, in effect.

After that, though, should come quantum chemistry work that could predict how molecules like new medicines interact; logistics to figure out the most efficient way to ship packages during the holiday shopping season; and new forms of security that rely on quantum physics instead of today's prevailing approach using math problems too hard to solve fast enough.

IBM's quantum computing lab.

One specific quantum chemistry example: factories need lots of expensive energy to make fertilizer, but microscopic bacteria do the same thing much more efficiently somehow. "We don't understand how that reaction occurs," said Jerry Chow, manager of IBM's experimental quantum computing team.

A quantum computer helps people understand what's really going on at the molecular level instead of fumbling around with trial-and-error experiments, Chow said.

The common thread for quantum computing tasks is rapidly analyzing a huge number of possible scenarios. That quantum computer strength also will be able to crack today's encryption -- by testing a colossal list of possible numbers to find which ones are mathematical keys that'll unlock private data.

Jerry Chow, manager of IBM's experimental quantum computing team

That famous quantum computer ability, though, is still "really far away," Crowder said. Meanwhile, governments and businesses are developing new quantum-proof algorithms.

The quantum era will add a thicket of new jargon to computing vocabulary. Brace yourself for cryogenic isolators, Josephson junctions and decoherence. For processing data, "and" and "or" logic gates from classical computing are joined by Hadamard gates and Pauli-X gates from quantum computing.

At their core, quantum computers store data with "qubits" -- quantum bits. Classical computers work by manipulating conventional bits -- small units of data that record either a 0 or 1. A single qubit, though, can store both 0 and 1 overlaid through a quantum peculiarity called superposition.

Superposition, combined with another quantum weirdness called entanglement, means that multiple qubits can be ganged together with exponential benefits to how much data they can store and process. A single qubit can store two states of information -- 0s and 1s -- while two qubits can store four states, three can store eight states, four can store sixteen and so on.

All that overlapping data stored in the same qubits lets quantum computers explore many possible solutions to a problem much faster than conventional computers -- finding which two integers multiply together into a huge number in encryption, say, or the fastest way to deliver a lot of packages.

But even then there are many practical difficulties. For example, the answer to a computing problem can be tucked away in one particular combination of 1s and 0s among many stored through superposition in entangled qubits. But the act of reading data from qubits "collapses" all the qubit states into a single collection of 1s and 0s -- and not necessarily the right one that holds the answer. So yes, it's complicated.

A 200mm silicon wafer houses IBM quantum-computing chips with 5 qubits apiece.

It'll be a long time before programmers learn the ropes for quantum computing. That's why IBM, Microsoft and Google offer simulated quantum computers. More ambitiously, IBM opened access to a website called Quantum Experience in 2016 that lets outsiders noodle with a real 5-qubit quantum computer.

A laptop can't simulate more than about 30 qubits, Crowder said. In the next few years, the IBM Q quantum computers will move to 50 qubits -- each a patch of niobium atoms hooked to its comrades with specialized aluminum wiring.
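
A rough back-of-the-envelope calculation shows why. Below is a minimal Python sketch (purely illustrative -- not IBM's tooling; the helper name state_vector_bytes is ours) that counts how much memory a full classical simulation of n qubits would need, assuming the usual dense representation of 2^n complex amplitudes at 16 bytes apiece.

```python
# Illustrative only: memory needed to hold the full state vector of n qubits,
# assuming 2**n complex amplitudes stored as 16-byte complex128 values.
import numpy as np

def state_vector_bytes(n_qubits: int) -> int:
    """Bytes required for a dense classical simulation of n qubits."""
    return (2 ** n_qubits) * np.dtype(np.complex128).itemsize  # 16 bytes each

for n in (1, 2, 3, 30, 50):
    gib = state_vector_bytes(n) / 2 ** 30
    print(f"{n:2d} qubits: {2 ** n:>20,d} amplitudes, about {gib:,.2f} GiB")
```

Under those assumptions, 30 qubits already needs roughly 16 GiB -- about what a well-equipped laptop carries -- while 50 qubits would need on the order of 16 million GiB, which is why the 50-qubit IBM Q machines have to be real hardware rather than simulations.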

The real business breakthrough comes when companies can build quantum computing into their operations. That'll require about a thousand qubits, Crowder said. Another big threshold could arrive with about a million qubits, enough to overcome problems with errors undermining calculations. Those errors are more of a problem with quantum computing than classical computing.

"Fault tolerance is hard in a quantum system. You're going to need a lot of qubits," Crowder said. "That's probably at least a decade-plus away."

So maybe your toothpaste will arrive faster. But not anytime soon.

Excerpt from:

IBM quantum computers fledge into a real business - CNET

IBM i License Transfer Deal Comes To The Power S812 Mini – IT Jungle

March 6, 2017 Timothy Prickett Morgan

Back in the early days of the AS/400 midrange system, the processor, memory, networking, and disk and tape storage hardware embodied in the system was by far the most costly part of that system, far outweighing the cost of the systems software that ran atop it. We don't have the precise numbers at hand, but it was something like 85 percent hardware cost and 15 percent software cost.

Fast-forward a few decades, and the Moore's Law improvements in every component of the hardware mean that hardware is far less costly. But software doesn't have Moore's Law scaling; in fact, it is based on people, and they cost more every year. And so software now represents a very large portion of the overall Power Systems-IBM i setup these days. So customers are often in a position where they want newer, more powerful, and more capacious hardware but cannot inexpensively move their existing IBM i and related system program licenses over to the new iron.

IBM has not cut prices for IBM i in recent years, as far as I know, and I have to guess because it is no longer possible to get list prices for anything in an easy fashion. Even partners have to use a configurator to get pricing, and it has to be tied to a particular customer and a particular set of serial numbers on machines for this information to be disseminated. (Again, this is as far as I know.) What I do know is that the list price system on IBMLink that I used for decades is no longer there. In any event, IBM i software has gotten a little more expensive over time when gauged in U.S. dollars, and IBM is loath to cut prices. But every now and then it does something in special deals to make it a little less costly for customers with older machines to move to newer machines with regard to software pricing, and it has done it again with the new Power S812 Mini system that was announced for IBM i and AIX operating systems back on Valentine's Day and that will be shipping on March 17.

The IBM i Power License Transfer Fee promotion announced last week, like the last such deal announced for earlier Power8-based systems in May 2016, offers customers a waiver of the fees that Big Blue charges to move an operating system license. As has been the case for many years, IBM charges $5,000 per core to move an IBM i license from an old machine to a new one. This transfer fee seems absurd, as I have pointed out before, for a low-end system where the operating system only costs $2,995 per core. Or, more precisely, where I think it costs that much, because that is what IBM used to charge per core in the P05 tier the last time I saw a list price on IBM i. I can see a $500 transfer fee for a license that has already been paid for, and I can make a very strong case for zero being a good fee in a world where IBM wants to get customers current. As detailed in the IBM i Processor and User Entitlement Transfer guide, IBM cushions the blow somewhat by saying that the $5,000 fee includes one year of Software Maintenance at no charge, which I think is funny for something that costs $5,000. And any Software Maintenance that you have paid for does not transfer from the old machine to the new one, also funny. But I have a warped sense of humor.

By the way, as you can see from that IBM i Processor and User Entitlement Transfer guide, the transfer fee is not a flat $5,000 across all classes of machines. That is just for a P05-class system that is transferring to another P05-class machine and within special groups organized by IBM. If you jump from Group 1 to Group 2 or Group 3 machines, the IBM i transfer fee is $18,000 per core, and from Group 4 to either Group 5 or Group 6 it costs $17,000 per core.

On February 28, IBM said in an announcement to business partners that it would allow customers to transfer IBM i licenses from their old machines to the new Power S812 for free, saving them the $5,000 per core charge. This is obviously a good thing, particularly if the Power S812 costs around 20 percent less than Power S822 and Power S824 machines of similar single-core, light-memory configuration. Every little bit helps. But the 64 GB memory cap on IBM i setups seems a bit light, perhaps even for the single 3 GHz core that the Power S812 machine has.

To take part in the IBM i Power License Transfer fee promotion, the old machine has to be installed for the past year or more and the new Power S812 machine has to ship between February 28 and August 31 of this year. Customers can apply this deal to up to five machines, but no more than that. As far as I know, this deal is only available in the United States and Canada, but obviously, customers all over the world should ask for the same treatment. And IBM similarly says that the transfer fee forgiveness only applies to machines moving in the same software tier as described by the guide above (not the IBM i software groups P05 through P60, which are different characterizations), but I think that anyone moving up to a higher group should at least ask for those $17,000 or $18,000 fees to be knocked down by $5,000 or abolished completely.
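
For a sense of what the waiver is worth, here is a hedged back-of-the-envelope sketch in Python. It uses only the fee figures cited above (the $5,000 P05-to-P05 fee, the $18,000 and $17,000 cross-group fees, and the five-machine cap); the dictionary keys and the function name are illustrative, not IBM's price list or configurator.

```python
# Back-of-the-envelope sketch of the Power S812 transfer-fee waiver.
# Figures come from the article above; names are illustrative only.
TRANSFER_FEE_PER_CORE = {
    "P05_to_P05": 5_000,              # waived under the Power S812 promotion
    "group1_to_group2_or_3": 18_000,  # not waived
    "group4_to_group5_or_6": 17_000,  # not waived
}
PROMO_MACHINE_CAP = 5

def promo_savings(machines: int, cores_per_machine: int = 1) -> int:
    """Dollars saved by the waived P05-to-P05 fee, capped at five machines."""
    eligible = min(machines, PROMO_MACHINE_CAP)
    return eligible * cores_per_machine * TRANSFER_FEE_PER_CORE["P05_to_P05"]

print(promo_savings(machines=3))  # $15,000 for three single-core boxes
print(promo_savings(machines=8))  # $25,000 -- only five machines qualify
```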

One more thing: Last May, when a similar IBM i license transfer deal was announced for Power S824 machines, IBM also waived the After License fee charges on Software Maintenance for customers who had let their support contracts lapse. Software Maintenance costs about 25 percent of the operating system licensing fees and is charged on an annual basis, and the After License charges can be in excess of a year's worth of Software Maintenance fees, depending on how long the contract has lapsed. This can also be a large number, and if IBM wants customers with older machines to move ahead, then it is probably wise to offer this deal again. IBM has not done so here in early 2017, but nothing prevents customers upgrading from older gear to Power8 machines of any type from asking.

Ask and ye might receive.

IBM Gives The Midrange A Valentines Day (Processor) Card

More Insight Into The Rumored Power Mini System

Geared Down, Low Cost Power IBM i Box Rumored

IBM Cuts Core And Memory Pricing On Entry Power Iron

Entry Power8 Systems Get Express Pricing, Fat Memory

Reader Feedback On Power S814 Power8 Running IBM i

Four-Core Power8 Box For Entry IBM i Shops Ships Early

IBM Wheels And Deals To Get IBM i Shops Current

IBM i Shops Pay The Power8 Hardware Premium

IBM i Runs On Two Of Five New Power8 Machines

IBM Tweaks Power Systems-IBM i Licensing Deal

More Servers Added to the IBM i License Transfer Deal

More Software Pricing Carrots for IBM i Shops

Tags: IBM i, Power S812, Power Systems


Visit link:

IBM i License Transfer Deal Comes To The Power S812 Mini - IT Jungle

Best Buy Inc Co (BBY) Stock Dives Into the Retail Dumpster – Investorplace.com


Best Buy (NYSE:BBY) announced slightly lower same-store sales during the holiday season, and investors dumped BBY stock in a hurry.

The drop was minor, less than 1%. But it was unexpected, and missed analyst estimates of a top-line gain of 0.5%.

For the quarter, Best Buy reported earnings of $607 million ($1.91 per share) on revenue of $13.48 billion. This compared with earnings of just $479 million ($1.40 per share) but revenue of $13.62 billion a year earlier. More importantly, while adjusted profits of $1.95 per share easily beat estimates of $1.67, revenues fell short of analysts' expectations, also of $13.62 billion.

The revenue shortfall meant analysts threw Best Buy stock into the dumpster with retailers such as Macy's Inc. (NYSE:M), with shares down almost 5% early Wednesday, to $42.40. During the Christmas season, on Dec. 8, the shares traded as high as $49.31.

Were the analysts right, or did they just offer smart investors a bargain? Is Amazon.com, Inc. (NASDAQ:AMZN) about to kill all retailers, or is this just a case of Moore's Law in action?

Best Buy even trounced the higher earnings whisper number of $1.66 per share. But actual estimates were all over the map, with some very bearish about the company's ability to cut costs and others bullish on margins.

The Zacks Metric Model noted that BBY stock had beaten estimates for four quarters, and that shareholders had been rewarded with a 36% gain. In particular, Best Buy was posting big gains in online sales -- the so-called omni-channel approach -- and Zacks was expecting an upside surprise.

On the bottom line, of course, Best Buy delivered one. And considering the sizable beat, BBY shares should've rocketed higher.

But something else is at play -- and that something is Moore's Law.

Moore's Law, which turned 50 in 2015, was described by Intel Corporation (NASDAQ:INTC) co-founder Gordon Moore as an expected increase in circuit density on silicon, doubling every 18 months for as far ahead as he could see, back in 1965.

But, as I have been writing for many years now, Moore's Law also turned traditional economics on its head. Moore's Law is deflationary, and the deflationary impact grows with time, as integrated circuits are incorporated into more and more things, and as the impact is compounded by its use in various ways.

Next Page

See the original post here:

Best Buy Inc Co (BBY) Stock Dives Into the Retail Dumpster - Investorplace.com

Expanding the Scope of Verification – EE Journal

March 1, 2017

by Kevin Morris

Looking at the agenda for the 2017 edition of the annual DVCon - arguably the industry's premier verification conference - one sees precisely what one would expect: tutorials, keynotes, and technical sessions focused on the latest trends and techniques in the ever-sobering challenge of functional verification in the face of the relentless advance of Moore's Law.

For five decades now, our designs have approximately doubled in complexity every two years. Our brains, however, have not. Our human engineering noggins can still process just about the same amount of stuff that we could back when we left college, assuming we haven't let ourselves get too stale. That means that the gap between what we as engineers can understand and what we can design has been growing at an exponential rate for over fifty years. This gap has always presented the primary challenge for verification engineers and verification technology. Thirty years ago, we needed to verify that a few thousand transistors were toggling the right ways at the right times. Today, that number is in the billions. In order to accomplish that and span the complexity gap, we need significant leverage.

The basic fundamentals of verification have persisted. Logic simulation has always been a mainstay, processing vectors of stimuli and expected results as fast and accurately as possible - showing us where our logic or timing has gone awry. Along the way, we started to pick up formal methods - giving us a way to prove that our functionality was correct, rather than trying to exhaustively simulate the important or likely scenarios. Parallel to those two avenues of advancement, we have been constantly struggling to optimize and accelerate the verification process. We've proceduralized verification through standards-based approaches like UVM, and we've worked to accelerate the execution of our verification processes through technologies such as FPGA-based prototyping and emulation.
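
To put a number on that complexity gap, consider a toy brute-force testbench, sketched here in Python purely for illustration (real flows use HDL simulators, constrained-random stimulus, and formal tools, not Python): exhaustively simulating an n-input combinational block takes 2^n stimulus vectors, so every added input doubles the work.

```python
# Toy illustration of why exhaustive simulation cannot span the complexity gap:
# an n-input combinational block needs 2**n stimulus vectors to cover fully.
from itertools import product

def dut_majority(bits):
    """Stand-in 'device under test': a simple majority-vote gate."""
    return sum(bits) > len(bits) // 2

def reference_majority(bits):
    """Golden reference model the testbench checks the DUT against."""
    return bits.count(1) > len(bits) // 2

def exhaustive_sim(n_inputs: int) -> int:
    """Apply every possible stimulus vector and compare DUT vs. reference."""
    vectors = 0
    for bits in product((0, 1), repeat=n_inputs):
        assert dut_majority(bits) == reference_majority(bits)
        vectors += 1
    return vectors

print(exhaustive_sim(10))      # 1,024 vectors -- trivial
print(f"{2 ** 64:,} vectors")  # a 64-input block is already hopeless
```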

Taking advantage of Moore's Law performance gains in order to accelerate the verification of our designs as they grow in complexity according to Moore's Law is, as today's kids would probably say, "kinda meta." But Moore's Law alone is not enough to keep up with Moore's Law. It's the classic perpetual-motion conundrum. There are losses in the system that prevent the process from being perfectly self-sustaining. Each technology-driven doubling of the complexity of our designs does not yield a doubling of the computation that can be achieved. We gradually accrue a deficit.

And the task of verification is constantly expanding in other dimensions as well. At first, it was enough to simply verify that our logic was correct - that the 1s, 0s, and Xs at the inputs would all propagate down to the correct results at the outputs. On top of that, we had to worry about timing and temporal effects on our logic. As time passed, it became important to verify that embedded software would function correctly on our new hardware, and that opened up an entire new world of verification complexity. Then, people got cranky about manufacturing variation and how that would impact our verification results. And we started to get curious about how things like temperature, radiation, and other environmental effects would call our verification results into question.

Today, our IoT applications span vast interconnected systems from edge devices with sensors and local compute resources through complex communication networks to cloud-based computing and storage centers and back again. We need to verify not just the function of individual components in that chain, but of the application as a whole. We need to confirm not simply that the application will function as intended - from both a hardware and software perspective - but that it is secure, robust, fault-tolerant, and stable. We need to ensure that performance - throughput and latency - is within acceptable limits, and that power consumption is minimized. This problem far exceeds the scope of the current notion of verification in our industry.

Our definition of correct behavior is growing increasingly fuzzy over time as well. For example, determining whether a processed video stream looks good is almost impossible from a programmatic perspective. The only reliable metric we have is human eyes subjectively staring at a screen. Many more metrics for system success suffer from similar subjectivity issues. As our digital applications interact more and more directly and intimately with our human, emotional, analog world, our ability to boil verification down to a known set of zeros and ones slips ever farther from our grasp.

The increasing dominance of big data and AI-based algorithms further complicates the real-world verification picture. When the behavior of both hardware and software is too complex to model, it is far too complex to completely verify. Until some radical breakthrough occurs in the science of verification itself, we will have to be content to verify components and subsystems along fairly narrow axes and hope that confirming the quality of the flour, sugar, eggs, butter, and baking soda somehow verifies the deliciousness of the cookie.

There is no question that Moore's Law is slowly grinding to a halt. And, while that halt may give us a chance to catch our breath on the Moore's Law verification treadmill, it will by no means bring an end to our verification challenges. The fact is - if Moore's Law ends today, we can already build systems far too complex to verify. If your career is in verification, and you are competent, your future job security looks pretty rosy.

But this may highlight a fundamental issue with our whole notion of verification. Verification somewhat tacitly assumes a waterfall development model. It presupposes that we design a new thing, then we verify our design, then we make and deploy the thing that we developed and verified. However, software development (and I'd argue that the development of all complex hardware/software applications such as those currently being created for IoT) follows something much more akin to agile development - where verification is a continual, ongoing process as the applications and systems evolve over time after their initial deployment.

So, let's challenge our notion of the scope and purpose of verification. Let's think about how verification serves our customers and our business interests. Let's re-evaluate our metrics for success. Let's consider how the development and deployment of products and services has changed the role of verification. Let's think about how our technological systems have begun to invert - where applications now span large numbers of diverse systems, rather than being contained within one. Moore's Law may end, but our real work in verification has just begun.

EDA. Semiconductor.

More here:

Expanding the Scope of Verification - EE Journal

Taiwan Semiconductor Mfg. Co. Ltd. Says 5-Nano Tech to Enter Risk Production in 2019 – Motley Fool

Taiwan Semiconductor Manufacturing Company (NYSE:TSM), the largest pure-play contract chip manufacturer, reportedly said (per DigiTimes) that it intends to begin "risk production" of chips using its 5-nanometer technology in the "first half of 2019."

It usually takes about a year from risk-production start to mass-production start, so if TSMC achieves this timeline, it should begin volume production of chips using its 5-nanometer technology in the first half of 2020.

Image source: Intel.

What does this mean for TSMC investors and customers? Let's take a closer look.

Chip manufacturers have historically tried to advance their respective manufacturing technologies at a regular pace prescribed by what is commonly referred to as Moore's Law.

According to this "law," the number of transistors (chips are made up of millions, if not billions, of transistors these days) that can be crammed into a given chip area doubles roughly every 24 months.

Since TSMC plans to begin mass production of its 7-nanometer technology in the first half of 2018, and mass production of its 5-nanometer technology -- which should deliver a doubling of transistor density compared to its 7-nanometer technology -- should follow in the first half of 2020, the company is essentially following Moore's Law (something that's becoming much more difficult to do these days).
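
As a quick sanity check of that cadence, here is a small illustrative Python calculation (relative densities only, under the stated assumption of a doubling every 24 months -- not TSMC's actual transistors-per-square-millimeter figures):

```python
# Relative transistor density on a Moore's Law cadence: doubling every 24 months.
def projected_density(base_density: float, months_elapsed: float,
                      doubling_period_months: float = 24.0) -> float:
    """Relative density after a given number of months on the cadence."""
    return base_density * 2 ** (months_elapsed / doubling_period_months)

density_7nm = 1.0                                  # treat the 7nm node as baseline
density_5nm = projected_density(density_7nm, 24)   # 1H 2018 -> 1H 2020
print(f"5nm vs. 7nm relative density: {density_5nm:.1f}x")  # 2.0x
```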

TSMC needs to be able to deliver new manufacturing technologies at a rapid pace to satisfy the needs of its major customers. These newer technologies allow the company's customers to cram in more features and functionality all while improving power efficiency -- a clear win for performance/power sensitive applications like high-end smartphone and data center processors.

TSMC has said in the past that it aims to continue to grow its market share with each successive manufacturing technology; if the company can deliver on its stated timeline for 5-nanometer tech, then it should offer industry-leading chip density with this technology.

Investors must keep an eye on what TSMC's key rivals in the contract chip manufacturing business --Samsung (NASDAQOTH:SSNLF) and GlobalFoundries -- ultimately manage to deliver, but it seems to me that TSMC is right on track to continue to have compelling enough technology to maintain or grow market share in advanced technologies.

In the past, chipmakers have run into difficulties transitioning to newer manufacturing technologies -- this stuff is getting harder with each successive generation. As good as TSMC's recent track record has been vis-a-vis technology transitions, there's always going to be some level of execution risk here.

Fortunately, TSMC tends to be very transparent with its investors, offering regular technology development and manufacturing ramp updates on its quarterly earnings calls. So, if there are any issues/delays, then I would expect TSMC to disclose those to investors in a timely fashion.

For what it's worth, given the immense pressure that TSMC likely faces to keep Apple happy, I think that the odds are extremely good that we will see iPhone models launched in 2020 that will be powered by chips manufactured in TSMC's 5-nanometer technology.

Ashraf Eassa has no position in any stocks mentioned. The Motley Fool owns shares of and recommends Apple. The Motley Fool has the following options: long January 2018 $90 calls on Apple and short January 2018 $95 calls on Apple. The Motley Fool has a disclosure policy.

Continued here:

Taiwan Semiconductor Mfg. Co. Ltd. Says 5-Nano Tech to Enter Risk Production in 2019 - Motley Fool

AFIS reach new levels as biometrics advance at Moore’s Law pace – SecureIDNews

Florida county solves cold case with new Automated Fingerprint Identification System

Old criminals beware. The chance that you will be identified from long-ago collected evidence is growing exponentially as biometric systems and Automated Fingerprint Identification Systems (AFIS) improve. Case in point, Pinellas County, Florida, where a man was arrested on Feb. 17, 2017 for a crime committed 25 years prior.

The Tampa Bay Reporter explains that latent fingerprint evidence was collected from the scene of the 1992 sexual attack at the time. The prints were run through the AFIS used by the Sheriff's Office back then, to no avail.

Fast forward to July 2016. A new AFIS from MorphoTrak had recently replaced the prior decades-old system used by the county. Latent print examiners once again processed the 25-year-old fingerprint evidence collected in the cold case. Thanks to the vastly improved matching algorithms and architecture, the new AFIS hit on a single suspect.

Several months later, the man was arrested, charged with sexual battery and taken to the jail.

In Pinellas County, the new AFIS system is returning more than 230 hits on latent print checks each month, an increase of more than 50% from the prior generation technology.

That is impressive, but is it a mere glimpse of things to come?

Biometric systems for AFIS solutions, border control, identity management, authentication, mobile ID, digital identity, etc., are advancing at a Moore's Law-style rate. Each incarnation breeds more significant advances than its predecessor, and the incarnations or generations are coming more and more rapidly.

We are in the midst of an unprecedented rise in the acceptance of biometrics. With acceptance comes investment. And this will result in massive and spiraling gains in all areas required for exponential growth: intellectual pursuit, financial investment, technical gains (chip, software, processing, cloud, et al.), government and standards interest, and more.

It is very likely that the same level of improvement seen in Pinellas County's AFIS between 1992 and 2016 will again be seen between 2016 and 2020. That next factor of advancement could take just months post-2020. And the sky is the limit from there.

New AFIS solutions hold the promise of far better identification and accuracy, streamlined human inputs and overall efficiencies, and ever-increasing processing power, storage and cross-system sharing.

One day (soon), criminals may be unable to hide.

If I had committed a past crime and left fingerprint evidence, I'd be preparing for relocation to somewhere without extradition, as the window for avoiding identification is closing rapidly.

Read the rest here:

AFIS reach new levels as biometrics advance at Moore's Law pace - SecureIDNews