Daily Archives: February 24, 2024

AI and You: OpenAI’s Sora Previews Text-to-Video Future, First Ivy League AI Degree – CNET

Posted: February 24, 2024 at 12:01 pm

AI developments are happening pretty fast. If you don't stop and look around once in a while, you could miss them.

Fortunately, I'm looking around for you and what I saw this week is that competition between OpenAI, maker of ChatGPT and Dall-E, and Google continues to heat up in a way that's worth paying attention to.

A week after updating its Bard chatbot and changing its name to Gemini, Google's DeepMind AI subsidiary previewed the next version of its generative AI chatbot. DeepMind told CNET's Lisa Lacy that Gemini 1.5 will be rolled out "slowly" to regular people who sign up for a waitlist; for now, it's available only to developers and enterprise customers.

Gemini 1.5 Pro, Lacy reports, is "as capable as" the Gemini 1.0 Ultra model, which Google announced on Feb. 8. The 1.5 Pro model has a win rate -- the share of benchmarks on which one model outperforms another -- of 87% against the 1.0 Pro and 55% against the 1.0 Ultra. So the 1.5 Pro is essentially an upgraded version of the best model available now.
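If you want to see what a win rate actually measures, here's a minimal sketch. The benchmark names and scores below are hypothetical placeholders for illustration, not Google's published Gemini numbers.

```python
def win_rate(model_scores, baseline_scores):
    """Fraction of shared benchmarks on which the model beats the baseline.

    Both arguments map benchmark name -> score, where higher is better.
    """
    shared = model_scores.keys() & baseline_scores.keys()
    wins = sum(model_scores[b] > baseline_scores[b] for b in shared)
    return wins / len(shared)

# Hypothetical scores for illustration only (not real Gemini results).
newer_model = {"mmlu": 82.0, "gsm8k": 91.0, "humaneval": 72.0, "math": 58.0}
older_model = {"mmlu": 72.0, "gsm8k": 86.0, "humaneval": 74.0, "math": 33.0}

print(win_rate(newer_model, older_model))  # 0.75: wins 3 of the 4 benchmarks
```

A 55% win rate against a flagship model, as reported for 1.5 Pro versus 1.0 Ultra, means the newer model comes out ahead on slightly more than half the benchmarks compared.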

Gemini 1.5 Pro can ingest video, images, audio and text to answer questions, added Lacy. Oriol Vinyals, vice president of research at Google DeepMind and co-lead of Gemini, described 1.5 as a "research release" and said the model is "very efficient" thanks to a unique architecture that can answer questions by zeroing in on expert sources in that particular subject rather than seeking the answer from all possible sources.

Meanwhile, OpenAI announced a new text-to-video model called Sora that's capturing a lot of attention because of the photorealistic videos it's able to generate. Sora can "create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions." Following up on a promise it made last week, along with Google and Meta, to watermark AI-generated images and video, OpenAI says it's also creating tools to detect videos created with Sora so they can be identified as AI generated.

Google and Meta have also announced their own gen AI text-to-video creators.

Sora, which means "sky" in Japanese, is also being called experimental, with OpenAI limiting access for now to so-called "red teamers," security experts and researchers who will assess the tool's potential harms or risks. That follows through on promises made as part of President Joe Biden's AI executive order last year, asking developers to submit the results of safety checks on their gen AI chatbots before releasing them publicly. OpenAI said it's also looking to get feedback on Sora from some visual artists, designers and filmmakers.

How do the photorealistic videos look? Pretty realistic. I agree with The New York Times, which described the short demo videos -- "of woolly mammoths trotting through a snowy meadow, a monster gazing at a melting candle and a Tokyo street scene seemingly shot by a camera swooping across the city" -- as "eye popping."

MIT Technology Review, which also got a preview of Sora, said the "tech has pushed the envelope of what's possible with text-to-video generation." Meanwhile, The Washington Post noted Sora could exacerbate an already growing problem with video deepfakes, which have been used to "deceive voters" and scam consumers.

One X commentator summarized it this way: "Oh boy here we go what is real anymore." And OpenAI CEO Sam Altman called the news about its video generation model a "remarkable moment."

You can see the four examples of what Sora can produce on OpenAI's intro site, which notes that the tool is "able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world. The model has a deep understanding of language, enabling it to accurately interpret prompts and generate compelling characters that express vibrant emotions."

But Sora has its weaknesses, which is why OpenAI hasn't yet said whether it will actually be incorporated into its chatbots. Sora "may struggle with accurately simulating the physics of a complex scene and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark. The model may also confuse spatial details of a prompt, for example, mixing up left and right."

All of this is to remind us that tech is a tool -- and that it's up to us humans to decide how, when, where and why to use that technology. In case you didn't see it, the trailer for the new Minions movie (Despicable Me 4: Minion Intelligence) makes this point cleverly, with its sendup of gen AI deepfakes and Jon Hamm's voiceover of how "artificial intelligence is changing how we see the world ... transforming the way we do business."

"With artificial intelligence," Hamm adds over the minions' laughter, "the future is in good hands."

Here are the other doings in AI worth your attention.

Twenty tech companies, including Adobe, Amazon, Anthropic, ElevenLabs, Google, IBM, Meta, Microsoft, OpenAI, Snap, TikTok and X, agreed at a security conference in Munich that they will voluntarily adopt "reasonable precautions" to guard against AI tools being used to mislead or deceive voters ahead of elections.

"The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes," the text of the accord says, according to NPR. "We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders."

But the agreement is "largely symbolic," the Associated Press reported, noting that "reasonable precautions" is a little vague.

"The companies aren't committing to ban or remove deepfakes," the AP said. "Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide 'swift and proportionate responses' when that content starts to spread."

AI has already been used to try to trick voters in the US and abroad. Days before the New Hampshire presidential primary, fraudsters sent an AI robocall that mimicked President Biden's voice and asked voters not to vote in the primary. That prompted the Federal Communications Commission this month to make AI-generated robocalls illegal. The AP said that "Just days before Slovakia's elections in November, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media."

"Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own," Nick Clegg, president of global affairs for Meta, told the Associated Press in an interview before the summit.

Over 4 billion people are set to vote in key elections this year in more than 40 countries, including the US, The Hill reported.

If you're concerned about how deepfakes may be used to scam you or your family members -- say, someone calls your grandfather, pretends to be you and asks for money -- Bloomberg reporter Rachel Metz has a good idea. She suggests it may be time for us all to create a "family password," a safe word or phrase to share with our family or personal network, that we can ask for to make sure we're talking to who we think we're talking to.

"Extortion has never been easier," Metz reports. "The kind of fakery that used to take time, money and technical know-how can now be accomplished quickly and cheaply by nearly anyone."

That's where family passwords come in, since they're "simple and free," Metz said. "Pick a word that you and your family (or another trusted group) can easily remember. Then, if one of those people reaches out in a way that seems a bit odd -- say, they're suddenly asking you to deliver 5,000 gold bars to a P.O. Box in Alaska -- first ask them what the password is."

How do you pick a good password? She offers a few suggestions, including using a word you don't say frequently and that's not likely to come up in casual conversations. Also, "avoid making the password the name of a pet, as those are easily guessable."

Hiring experts have told me it's going to take years to build an AI-educated workforce, considering that gen AI tools like ChatGPT weren't released until late 2022. So it makes sense that learning platforms like Coursera, Udemy, Udacity, Khan Academy and many universities are offering online courses and certificates to upskill today's workers. Now the University of Pennsylvania's School of Engineering and Applied Science said it's the first Ivy League school to offer an undergraduate major in AI.

"The rapid rise of generative AI is transforming virtually every aspect of life: health, energy, transportation, robotics, computer vision, commerce, learning and even national security," Penn said in a Feb. 13 press release. "This produces an urgent need for innovative, leading-edge AI engineers who understand the principles of AI and how to apply them in a responsible and ethical way."

The bachelor of science in AI offers coursework in machine learning, computing algorithms, data analytics and advanced robotics and will have students address questions about "how to align AI with our social values and how to build trustworthy AI systems," Penn professor Zachary Ives said.

"We are training students for jobs that don't yet exist in fields that may be completely new or revolutionized by the time they graduate," added Robert Ghrist, associate dean of undergraduate education in Penn Engineering.

FYI, the cost of an undergraduate education at Penn, which typically spans four years, is over $88,000 per year (including housing and food).

For those not heading to college or who haven't signed up for any of those online AI certificates, their AI upskilling may come courtesy of their current employer. The Boston Consulting Group, for its Feb. 9 report "What GenAI's Top Performers Do Differently," surveyed over 150 senior executives across 10 sectors.

Bottom line: companies are starting to look at existing job descriptions and career trajectories, and the gaps they're seeing in the workforce when they consider how gen AI will affect their businesses. They've also started offering gen AI training programs. But these efforts don't lessen the need for today's workers to get up to speed on gen AI and how it may change the way they work -- and the work they do.

In related news, software maker SAP looked at Google search data to see which states in the US were most interested in "AI jobs and AI business adoption."

Unsurprisingly, California ranked first in searches for "open AI jobs" and "machine learning jobs." Washington state came in second place, Vermont in third, Massachusetts in fourth and Maryland in fifth.

California, "home to Silicon Valley and renowned as a global tech hub, shows a significant interest in AI and related fields, with 6.3% of California's businesses saying that they currently utilize AI technologies to produce goods and services and a further 8.4% planning to implement AI in the next six months, a figure that is 85% higher than the national average," the study found.

Virginia, New York, Delaware, Colorado and New Jersey, in that order, rounded out the top 10.

Over the past few months, I've highlighted terms you should know if you want to be knowledgeable about what's happening as it relates to gen AI. So I'm going to take a step back this week and provide this vocabulary review for you, with a link to the source of the definition.

It's worth a few minutes of your time to know these seven terms.

Anthropomorphism: The tendency for people to attribute humanlike qualities or characteristics to an AI chatbot. For example, you may assume it's kind or cruel based on its answers, even though it isn't capable of having emotions, or you may believe the AI is sentient because it's very good at mimicking human language.

Artificial general intelligence (AGI): A description of programs that are as capable as -- or even more capable than -- a human. While full general intelligence is still off in the future, models are growing in sophistication. Some have demonstrated skills across multiple domains, ranging from chemistry to psychology, with task performance paralleling human benchmarks.

Generative artificial intelligence (gen AI): Technology that creates content -- including text, images, video and computer code -- by identifying patterns in large quantities of training data and then creating original material that has similar characteristics.

Hallucination: An unexpected, incorrect response from an AI program. A language model might suddenly bring up fruit salad recipes when you were asking about planting fruit trees. It might also make up scholarly citations, lie about data you ask it to analyze or fabricate facts about events that aren't in its training data. It's not fully understood why this happens, but it can arise from sparse training data, information gaps and misclassification.

Large language model (LLM): A type of AI model that can generate human-like text and is trained on a broad dataset.

Prompt engineering: This is the act of giving AI an instruction so it has the context it needs to achieve your goal. Prompt engineering is best associated with OpenAI's ChatGPT, describing the tasks users feed into the algorithm. (e.g. "Give me five popular baby names.")

Temperature: In simple terms, model temperature is a parameter that controls how random a language model's output is. A higher temperature means the model takes more risks, giving you a diverse mix of words. On the other hand, a lower temperature makes the model play it safe, sticking to more focused and predictable responses.

Model temperature has a big impact on the quality of the text generated in a bunch of [natural language processing] tasks, like text generation, summarization and translation.

The tricky part is finding the perfect model temperature for a specific task. It's kind of like Goldilocks trying to find the perfect bowl of porridge -- not too hot, not too cold, but just right. The optimal temperature depends on things like how complex the task is and how much creativity you're looking for in the output.
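To make the Goldilocks analogy concrete, here's a minimal sketch of how temperature rescales a model's raw scores (logits) before sampling. The three-word vocabulary and logit values are made up for illustration; real models do this over tens of thousands of tokens.

```python
import math

def sample_distribution(logits, temperature):
    """Convert raw model scores (logits) into sampling probabilities,
    scaled by temperature. Lower temperature sharpens the distribution
    (more predictable output); higher temperature flattens it
    (more varied output)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits for three candidate next words.
logits = [2.0, 1.0, 0.5]

cool = sample_distribution(logits, temperature=0.5)  # plays it safe
warm = sample_distribution(logits, temperature=2.0)  # takes more risks

# At low temperature the top word dominates; at high temperature the
# probability spreads more evenly across the alternatives.
print(cool[0] > warm[0])  # True
```

The same softmax is computed either way; temperature only controls how peaked or flat the resulting probabilities are, which is why dialing it up produces more surprising word choices.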

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.


Vitalik Buterin and Sandeep Nailwal headline decentralized agi summit @ Ethdenver tackling threats of centralized AI – Grit Daily

Posted: at 12:01 pm

Denver, USA, February 23rd, 2024, Chainwire

The Decentralized AGI Summit, organized by Sentient and Symbolic Capital, will bring together top thought leaders in Decentralized AI like Vitalik Buterin, Sandeep Nailwal, Illia Polosukhin, and Sreeram Kannan.

As the development of artificial general intelligence (AGI) systems accelerates, there are growing concerns that centralized AI controlled by a small number of actors poses a major threat to humanity. The inaugural Decentralized AGI Summit will bring together top experts in AI and blockchain like Vitalik Buterin, Sandeep Nailwal, Illia Polosukhin, Sreeram Kannan, and more, to explore how decentralized, multi-stakeholder governance models enabled by blockchain technology can help make the development of AGI safer, more transparent and aligned with the greater good.

"The rapid acceleration of centralized AI and its integration into everyday life has led humanity to a crossroads between two future worlds," says Sandeep Nailwal. "On the one hand, we have the choice of a Closed World. This world is controlled by a few closed-source models run by massive mega-corporations. On the other hand, we have the choice of an Open World. In this world, models are open-source by default, inference is verifiable, and value flows back to the stakeholders. The Open World is the world we want to live in, but it is only possible by leveraging blockchain to make AI more transparent and just."

The Decentralized AGI Summit will take place on Monday, February 26th from 3-9pm MST. It is free and open to the public to attend at: https://decentralizedagi.org/.

"We are excited to help facilitate this important discussion around the development of safe and ethical AGI systems that leverage decentralization and multi-stakeholder governance," said Kenzi Wang, Co-Founder and General Partner at Symbolic Capital. "Bringing luminaries across both the AI and web3 domains together will help push forward thinking on this critical technological frontier."

Featured keynote speakers include:

Vitalik Buterin, Co-Founder of Ethereum Foundation

Sandeep Nailwal, Co-Founder of Polygon Labs

Illia Polosukhin, Co-Founder of Near Foundation

Sreeram Kannan, Founder of Eigenlayer

Topics will span technical AI safety research, governance models for AGI systems, ethical considerations, and emerging use cases at the intersection of AI and blockchain. The summit aims to foster collaboration across academic institutions, industry leaders and the decentralized AI community.

For more details and to register, visit https://decentralizedagi.org/.

About Sentient

Sentient is building a decentralized AGI platform. Sentient's team is composed of leading web3 founders, builders, researchers and academics who are committed to creating trustless and open artificial intelligence models.

Learn more about Sentient here: https://sentient.foundation/

About Symbolic Capital

Symbolic Capital is a people-driven investment firm supporting the best web3 projects globally. Our team has founded and led some of the most important blockchain companies in the world, and we leverage this background to provide unparalleled support to the companies in our portfolio.

Learn more about Symbolic Capital here: https://www.symbolic.capital/

Sam Lehman[emailprotected]


Fire damages Waveland home – Journal Review

Posted: at 12:01 pm

Multiple fire departments responded to the scene on Monday in Waveland.

Journal Review

WAVELAND -- Firefighters from several departments responded Monday to a single-story house fire in the 800 block of West Main Street in Waveland.

The fire was reported at 10:28 a.m. Firefighters from Waveland, New Market, Marshall and Russellville answered the call. Also on scene were medics with Crawfordsville EMS and deputies with the Montgomery County Sheriff's Office.

The property owner, Brenda Allen, was not at home when the fire was discovered. She later arrived on the scene.

The cause and origin of the fire were not immediately available. The Red Cross also offered assistance.

Crews cleared the scene shortly before 1 p.m. Monday.


The World’s 16 Most Expensive Private Islands to Visit – Yahoo Finance

Posted: at 12:01 pm


Cruise lines will pay new tax on private islands in the Bahamas – FOX 35 Orlando

Posted: at 12:01 pm


Michael Stern’s JDS Development Sued Over Private Air Travel – The Real Deal

Posted: at 12:01 pm

Developer says complaint lacks merit, expects a judge to dismiss

JDS Development's Michael Stern (JDS Development, Aktug Ates via Wikimedia Commons, Getty)

Michael Stern's JDS Development may be headed for a hard landing over some flight bills.

The company was sued this week by a charter flight company that alleged the developer failed to pay for private flights that jetted Stern and his associates between Florida, New York and the Caribbean Islands in 2022 and 2023.

The developer allegedly owes more than $1.2 million to Hawthorne Finance Holdings, which owned the New York-based charter service ExcelAire until it sold the company last summer. Hawthorne retained ownership of the business's accounts receivable as part of the sale agreement and is now trying to collect what it calls unpaid bills in court.

"The complaint is wholly without merit and we expect it to be dismissed," said a legal representative of JDS. Stern declined to comment.

Stern is the ambitious developer behind the Brooklyn Tower, now the tallest building in New York City outside of Manhattan, the Steinway Tower on Billionaires' Row and 888 Brickell, a supertall Miami condo tower to be branded by Dolce & Gabbana.

It was a month after the fashion company was reported to be a partner in his Brickell residences that Stern flew from Miami to St. Barts and St. Martin, according to Hawthorne's legal filings. The company claims JDS has tried to escape paying for private travel to the pricey Caribbean vacation destinations.

In 2022, as JDS was fighting a legal battle to win approval to build a 500,000-square-foot residential tower at 247 Cherry Street in Manhattan's Two Bridges neighborhood, Stern flew from Miami to Teterboro Airport and stayed in New York for about a week, according to a list of travel costs that Hawthorne alleges went unpaid. Other expenses include fuel costs, hangar fees, a per diem for pilots and aircraft maintenance.

Stern's preferred transport was an Embraer Legacy 600, a midsize jet that can fly up to 14 people and cross 3,400 nautical miles at a cruising speed of about 500 miles per hour. With two Rolls-Royce engines, the plane can reach as far as London, Alaska or Peru from New York.

In an exchange of emails between Hawthorne and Stern, the developer identified $68,000 in duplicative charges while going through the bills, but he apparently ceased communication with the company without paying the balance it alleges he owes.

Representatives for Hawthorne did not return requests for comment. The charter flight company ExcelAire now operates under different ownership as Executive Fliteways.


Amritpal Singh's mother, kin of other NSA detainees go on hunger strike, want them to be shifted to Punjab jail – The Tribune India

Posted: at 12:01 pm


Rob Joyce leaving NSA at the end of March – CyberScoop

Posted: at 12:01 pm

Rob Joyce, the veteran National Security Agency official, is retiring at the end of March after 34 years at the spy agency, leaving the federal government without one of its most experienced cybersecurity experts going into a critical election year and amid warnings that China is carrying out unprecedented cyber operations against U.S. critical infrastructure.

In recent years, Joyce has established himself as an unusually public-facing official at the historically secretive NSA. In his current role as head of the agency's Cybersecurity Directorate, Joyce has pushed a spy agency once known as "No Such Agency" to improve intelligence-sharing on cyberthreats and better collaborate with critical infrastructure providers and industry.

"Rob's leadership of the agency's critical cybersecurity mission has been exemplary," NSA Director Gen. Timothy D. Haugh said in a statement. "His vision and development of the CSD team and its capacities ensures that NSA's cybersecurity mission is healthy and will continue to be successful in protecting our allies and national systems well into the future."

David Luber, deputy director of the CSD and a 36-year NSA veteran, will take over for Joyce.

At a 2022 CyberScoop event in Washington, Joyce spoke about the need for the NSA to shift away from its historical secrecy and instead "make available the insights about what we know without putting at risk how we know it. That's really an inflection point that lets us get to more prolific, more extensive and more closely sharing for operational outcomes."

"It doesn't do anybody any good if we know a thing and don't do something," Joyce continued. "Doing is really the focus in the cybersecurity area. And if you've got secrets and understanding and you don't operationalize those, they don't count."

Joyce spoke frequently in recent years about the threats that Chinese hackers posed to the U.S., particularly with regard to critical infrastructure.

During an appearance last month at the International Conference on Cyber Security at Fordham University, however, Joyce sounded a relatively optimistic note on how the NSA and other agencies have successfully leveraged artificial intelligence and machine learning to better combat Chinese hacking operations that might have previously side-stepped the more tried-and-true defensive approaches.

Joyce joined the NSA in 1989 and served in multiple roles over his nearly three and a half decades at Fort Meade. He led the agency's elite hacking unit, Tailored Access Operations, between 2013 and 2017. During the Trump administration, he served a stint in the White House as a senior cybersecurity advisor before returning to the NSA, where his roles included special liaison officer at the U.S. Embassy in London.

"I am honored to have served for over 34 years at the National Security Agency," Joyce said in a statement. "It has been a privilege to lead the nation's most talented and dedicated team of cybersecurity professionals. Making a difference in the security of the nation is truly an honor."


NSA cyber director to step down after 34 years of service – Nextgov/FCW

Posted: at 12:01 pm

NSA Cybersecurity Director Rob Joyce will retire at the end of March after 34 years of service, the agency announced Tuesday.

Joyce has led NSA's Cybersecurity Directorate since 2021, working with other government and intelligence community officials on protecting U.S. critical infrastructure and other key assets amid ever-growing fears about nation-state cyber threats. David Luber, the Cybersecurity Directorate's second-in-command, will take his place.

An outspoken agency official who would often engage with members of the media about the state of play in cyber policy, he played a critical role in crafting a Trump-era executive order that worked to establish a greater accountability culture among U.S. cybersecurity and IT leaders.

"I am honored to have served for over 34 years at the National Security Agency," Joyce said in a written announcement. "It has been a privilege to lead the nation's most talented and dedicated team of cybersecurity professionals. Making a difference in the security of the nation is truly an honor."

His departure comes at a time when American security officials are on high alert, with a presidential election looming in November and with the NSA and other intelligence partners having issued several warnings this year about complex attempts by hackers backed by China, Russia and others to sabotage U.S. infrastructure and other centralized economic systems.

The news also comes as the NSA and other intelligence community partners are urging Congress to reauthorize a controversial spying power known as Section 702 that the agency argues is an absolute necessity for U.S. national security, with the tool reportedly having been recently used to detect emerging Russian nuclear capabilities in space.

Joyce has also frequently warned of hackers attempts to leverage new and emerging technologies, like generative artificial intelligence chatbots that researchers have said can help enhance or optimize malware deployment.

Before CSD, Joyce worked in London as NSA's cryptologic policy lead and held positions on the National Security Council, serving as a cybersecurity coordinator to the Oval Office between 2017 and 2018. Between 2013 and 2017, he led the clandestine Tailored Access Operations unit within NSA, responsible for foreign cyber warfare and intelligence-gathering operations.

"Rob's leadership of the agency's critical cybersecurity mission has been exemplary," NSA and Cyber Command leader Gen. Timothy Haugh said. "His vision and development of the CSD team and its capacities ensures that NSA's cybersecurity mission is healthy and will continue to be successful in protecting our allies and national systems well into the future."


Behind Khattar govt's U-turn on NSA against farm leaders, fear of rural blowback, Congress gain – The Indian Express

Posted: at 12:01 pm
