Python beats Java to become the second-most popular programming language

For the first time in two decades, Python beat Java to become the second-most popular programming language this month, according to the TIOBE Programming Community Index. C language retains pole position.

The TIOBE (“The Importance Of Being Earnest”) Index ranks the popularity of programming languages based on the number of skilled engineers worldwide, courses, and third-party vendors. Ratings are calculated using popular search engines, including Google, Bing, Yahoo!, Wikipedia, Baidu, and Amazon.

Python’s recent surge in popularity can be attributed to the rise of data mining, artificial intelligence (AI), and numerical computing, TIOBE stated.

Programming skills are required everywhere, and Python is the go-to option for non-software engineers these days, since it is easy to learn, offers fast edit cycles, and deploys smoothly.

aws-CERTIFIED

C++ and C# continue to rank fourth and fifth in the index.

Last month, Python ranked third in popularity, with the largest year-over-year growth among the top 50 programming languages; Java ranked second, with the largest negative year-over-year growth.

Microsoft Obtains Exclusive License for GPT-3 AI Model

Microsoft announced an agreement with OpenAI to license OpenAI’s GPT-3 deep-learning model for natural-language processing (NLP). Although Microsoft’s announcement says it has “exclusively” licensed the model, OpenAI will continue to offer access to the model via its own API.

Microsoft CTO Kevin Scott wrote about the agreement on Microsoft’s blog. The deal builds on an existing relationship between the two organizations, which includes a partnership in building a supercomputer on Microsoft’s Azure cloud platform. OpenAI recently used that supercomputer to train GPT-3, which at 175 billion parameters is one of the largest NLP deep-learning models trained to date. Scott said the licensing of GPT-3 will:

[Allow] us to leverage its technical innovations to develop and deliver advanced AI solutions for our customers, as well as create new solutions that harness the amazing power of advanced natural language generation.

GPT-3 is the third iteration of OpenAI’s Generative Pre-Trained Transformer model. The original GPT model was released in 2018 and contained 117 million parameters. For the next iteration, GPT-2, OpenAI scaled up the model more than 10x, to 1.5 billion parameters. Because the text generated by GPT-2 could often be as “credible” as text written by humans, OpenAI at first declined to release the full model, citing potential for misuse in generating “deceptive, biased, or abusive language at scale.” However, by November 2019, OpenAI had seen “no strong evidence of misuse” and decided to release the model.

In July 2019, Microsoft and OpenAI announced a partnership, which included a $1 billion investment from Microsoft, to “jointly build new Azure AI supercomputing technologies.” OpenAI also agreed to run its services on Azure and to make Microsoft its “preferred partner for commercializing new AI technologies.” During its Build conference this May, Microsoft showcased the supercomputer built for OpenAI on its Azure cloud platform: “a single system with more than 285,000 CPU cores, 10,000 GPUs and 400 gigabits per second of network connectivity for each GPU server.”

GPT-3, announced earlier this year, was a 100x scale-up of GPT-2 and set new state-of-the-art results on several NLP tasks. The training dataset contained nearly half a trillion words. Training the model on the Azure supercomputer consumed “several thousand petaflop/s-days of compute” and is estimated to have cost from $4.6 million to $12 million. As with GPT-2, OpenAI has not released the trained model; however, OpenAI did release a limited-access web API for developers to make calls to the model from their apps.
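As a rough illustration of what calling that web API looks like, here is a minimal sketch using OpenAI’s Python client as it existed during the beta; the engine name, prompt, and parameters are illustrative assumptions, and access requires a beta API key from the waitlist.

```python
# Minimal sketch of calling the GPT-3 beta API with OpenAI's Python client.
# Assumes the `openai` package and a beta API key; engine name, prompt, and
# sampling parameters are illustrative, not a canonical recipe.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="davinci",  # largest GPT-3 engine offered in the beta
    prompt="Once upon a time,",
    max_tokens=60,     # cap on generated tokens
    temperature=0.7,   # sampling randomness
)
print(response.choices[0].text.strip())
```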

The licensing deal with Microsoft is the latest of several recent moves by OpenAI to monetize its technology. Originally founded as a non-profit, OpenAI launched a new “hybrid of a for-profit and nonprofit,” or “capped-profit,” company called OpenAI LP in March 2019. The goal of the new company was to “raise investment capital and attract employees with startup-like equity.” OpenAI’s API page contains an FAQ section that defends its commercial products as “one of the ways to make sure we have enough funding to succeed.” While the terms of the Microsoft license have not been disclosed, OpenAI claims that it has “no impact” on users of OpenAI’s API, who can “continue building applications…as usual.”

With the license agreement being touted as “exclusive,” and given OpenAI’s past reluctance to release their trained models, many commenters have joked that the company should change its name to “ClosedAI.” One Hacker News reader questioned the long-term commercial viability of GPT-3:

Anyone else feel like this idea of commercializing GPT-3 is bound to go nowhere as the research community figures out how to replicate the same capabilities in smaller cheaper open models within a few months or even a year?

The OpenAI API is currently in beta, with a waitlist for gaining access. The 1.5-billion-parameter GPT-2 model is available on GitHub. (Source: https://www.infoq.com/)
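For readers who want to try the released GPT-2 weights themselves, here is a minimal sketch using the Hugging Face transformers port, one common way to load the 1.5B checkpoint (named “gpt2-xl” on the model hub); the prompt and sampling settings are illustrative.

```python
# Minimal sketch: sampling from the released 1.5B-parameter GPT-2 via the
# Hugging Face transformers library (hub name "gpt2-xl").
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl")

input_ids = tokenizer.encode("The future of programming is", return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=60,    # total length, prompt included
    do_sample=True,   # sample rather than decode greedily
    temperature=0.8,  # soften the next-token distribution
    top_p=0.9,        # nucleus sampling
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```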

 

Artificial Intelligence: New Algorithm Replaces Writers, Journalists, and Poets

GPT-3 Creative Fiction

Creative writing by OpenAI’s GPT-3 model, demonstrating poetry, dialogue, puns, literary parodies, and storytelling. Plus advice on effective GPT-3 prompt programming & avoiding common errors.

The latest and greatest neural network for unrestricted natural language generation is OpenAI’s GPT-3. GPT-3 is like GPT-1 and the GPT-2 I’ve used extensively before, only much more so, and it goes beyond them in a fascinating new way.

GPT-3’s samples are not just close to human level: they are creative, witty, deep, meta, and often beautiful. They demonstrate an ability to handle abstractions, like style parodies.

Scaling works: quantity is a quality all its own. The scaling of GPT-2-1.5b by 116× to GPT-3-175b has worked surprisingly well and unlocked remarkable flexibility in the form of meta-learning, where GPT-3 can infer new patterns or tasks and follow instructions purely from text fed into it.

What can we do with GPT-3? Here, we’re all about having fun while probing GPT-3’s abilities for creative writing tasks, primarily (but far from limited to) poetry. Fortunately, OpenAI granted me access to their Beta API service, which provides a hosted GPT-3 model, letting me spend a great deal of time interacting with GPT-3 and writing things. Naturally, I’d like to write poetry with it: but GPT-3 is too big to finetune as I did GPT-2, and OpenAI doesn’t (yet) support any kind of training through their API. Must we content ourselves with mediocre generic poetry, at best, deprived of finetuning directly on chosen poetry corpora or authors we might like to parody? How much does GPT-3 improve, and what can it do?
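Since the API exposes no finetuning, everything depends on prompt programming: packing the task description, style cues, and examples into the context window so the model infers what to do. A minimal sketch of a few-shot poetry prompt against the beta API follows; the prompt text, engine name, and parameters are my own illustrative assumptions, not a recipe from the article.

```python
# Illustrative few-shot "prompt programming" sketch against the 2020 beta API:
# the model infers the task purely from examples in the prompt, with no
# weight updates. Engine name, prompt, and parameters are assumptions.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # beta key from the waitlist

prompt = """Write a short poem in the style of the named poet.

Poet: Emily Dickinson
Poem: A word is dead / When it is said, / Some say. / I say it just / Begins to live / That day.

Poet: Dr. Seuss
Poem:"""

response = openai.Completion.create(
    engine="davinci",   # largest GPT-3 engine in the beta
    prompt=prompt,
    max_tokens=80,
    temperature=0.9,    # higher temperature for more adventurous sampling
    stop=["Poet:"],     # stop before the model invents another example
)
print(response.choices[0].text.strip())
```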

Turns out: a lot! Below, I walk through first impressions of using GPT-3, and countless samples. In the latest twist on Moravec’s paradox, GPT-3 still struggles with commonsense reasoning & factual knowledge of the sort a human finds effortless after childhood, but handles well things like satire & fiction writing & poetry, which we humans find so difficult & impressive even as adults. In addition to the Cyberiad, I’d personally highlight the Navy Seal & Harry Potter parodies, the Devil’s Dictionary of Science/​Academia, “Uber Poem”, “The Universe Is a Glitch” poem (with AI-generated rock music version), & “Where the Sidewalk Ends”.

What Benchmarks Miss

The GPT-3 paper includes evaluations of zero-shot/few-shot performance across a wide range of tasks, but I fear that unless one is familiar with the (deadly dull) benchmarks in question, it won’t be impressive. You can skip to the appendix for more examples like its poems, or browse the random samples.

The original OpenAI Beta API homepage includes many striking examples of GPT-3 capabilities ranging from chatbots to question-based Wikipedia search to legal discovery to homework grading to translation; I’d highlight AI Dungeon’s Dragon model (example), and “Spreadsheets”/“Natural Language Shell”/“Code Completion”. Andrew Mayne describes using GPT-3 to generate book recommendation lists, read interactive stories, and engage in conversations with historical figures like Ada Lovelace, summarize texts (such as for elementary school children, also available as a service now, Simplify.so) or summarize movies in emoji (Matrix: “🤖🤐”; Hunger Games: “🏹🥊🌽🏆”), convert screenplay ↔︎ story, summarize/write emails, and rewrite HTML. Paras Chopra finds that GPT-3 knows enough Wikipedia & other URLs that its basic Q&A behavior can be augmented to include a ‘source’ URL, and so one can make a knowledge-base ‘search engine’ with clickable links for any assertion (i.e. the user can type in “What year was Richard Dawkins’s The Selfish Gene published?” and GPT-3 will return a tuple like (“The Selfish Gene was published in 1976”, “https://en.wikipedia.org/wiki/The_Selfish_Gene”), which can be parsed & presented as a search engine). Hendrycks et al 2020 tests few-shot GPT-3 on common moral reasoning problems, and while it doesn’t do nearly as well as a finetuned ALBERT overall, interestingly, its performance degrades the least on the problems constructed to be hardest.
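As a hedged sketch of how such a sourced Q&A prompt could be wired up (the few-shot example and the tuple parsing are illustrative assumptions; GPT-3 can fabricate URLs, so any returned link needs verification):

```python
# Hedged sketch: few-shot prompt asking GPT-3 to answer with an
# (answer, source URL) tuple that the caller then parses. All examples are
# invented; GPT-3 can fabricate URLs, so verify any returned link.
import ast
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = """Answer each question as a Python tuple of (answer, source URL).

Q: What year was George Orwell's 1984 published?
A: ("1984 was published in 1949", "https://en.wikipedia.org/wiki/Nineteen_Eighty-Four")

Q: What year was Richard Dawkins's The Selfish Gene published?
A:"""

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=60,
    temperature=0.0,  # low randomness for factual lookups
    stop=["\nQ:"],
)

answer, url = ast.literal_eval(response.choices[0].text.strip())
print(answer, "->", url)
```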

Ryan North experimented with Crunchyroll anime, Star Trek: The Next Generation, & Seinfeld plot summaries. Max Woolf has a repo of GPT-3 example prompts & various completions such as the original GPT-2 “unicorn” article, Revenge of the Sith, Stack Overflow Python questions, and his own tweets (note that many samples are bad because the prompts & hyperparameters are often deliberately bad, e.g. the temperature=0 samples, to demonstrate the large effect of poorly-chosen settings as a warning). Janelle Shane experimented with weird dog descriptions to accompany deformed GAN-dog samples, and 10,000-year nuclear waste warnings based on the famous 1993 Sandia report on long-term nuclear waste warning messages for the Waste Isolation Pilot Plant. Summers-Stay tried imitating Neil Gaiman & Terry Pratchett short stories with excellent results. Arram Sabeti has done “songs, stories, press releases, guitar tabs, interviews, essays, and technical manuals”, with his Elon Musk Dr. Seuss poems a particular highlight. Paul Bellow (LitRPG) experiments with RPG backstory generation. Merzmensch Kosmopol enjoyed generating love letters written by a toaster. James Yu co-wrote a SF Singularity short story with GPT-3, featuring regular meta sidenotes where he & GPT-3 debate the story in-character. Daniel Bigham plays what he dubs “19 degrees of Kevin Bacon”, which links Mongolia to (eventually) Kevin Bacon. Alexander Reben prompted for contemporary art/sculpture descriptions, and physically created some of the ones he liked best using a variety of mediums like matchsticks, toilet plungers, keys, collage, etc.

Harley Turan found that, somehow, GPT-3 can associate plausible color hex codes with specific emoji. Even more perplexingly, Sharif Shameem discovered that GPT-3 could write JSX (a JavaScript+HTML hybrid syntax) according to a specification like “5 buttons, each with a random color and number between 1–10”, or increase/decrease a balance in React, or a very simple to-do list, and it would often work, or require relatively minor fixes. GPT-3 can also write some simple SVG shapes or SVG/Chart.js bar graphs, and do text→LaTeX and SQL queries. While I don’t think programmers need worry about unemployment (NNs will be a complement until they are so good they are a substitute), the code demos are impressive in illustrating just how diverse the skills created by pretraining on the Internet can be. (Source: https://www.gwern.net/)
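As a hedged illustration of that specification-to-code style of prompting, here is a sketch for the text→SQL case; the table schema and examples are invented for illustration, and generated SQL should always be reviewed before running.

```python
# Hedged sketch: natural-language request -> SQL via a few-shot prompt.
# The table schema and examples are invented; review generated SQL before use.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = """Translate each request into SQL for the table
orders(id, customer, total, created_at).

Request: total revenue per customer, highest first
SQL: SELECT customer, SUM(total) AS revenue FROM orders GROUP BY customer ORDER BY revenue DESC;

Request: number of orders placed in 2020
SQL:"""

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=60,
    temperature=0.0,  # favor the most likely completion for code
    stop=["\nRequest:"],
)
print(response.choices[0].text.strip())
```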

BPSC 65th Mains Exam 2020 from 25 November: Download Bihar CCE Exam New Notice @bpsc.nic.in, Check Admit Card Updates Here

BPSC 65th Mains Exam Date 2020: The Bihar Public Service Commission (BPSC) is conducting the 65th Combined Competitive Exam (CCE) Mains 2020 on 25, 26, and 28 November 2020. Earlier, the BPSC 65th Mains was scheduled for 13, 14, and 20 October 2020 but was postponed for unavoidable reasons.

All candidates who qualified in the BPSC 65th Civil Services Prelims Exam can appear for the Bihar 65th CCE Mains Exam on the scheduled dates and times.

BPSC 65th Mains Admit Card 2020:

In order to appear for the BPSC CCE Mains Exam 2020, candidates will be required to download the Mains admit card. The BPSC 65th CCE Mains Exam admit card is expected in the last week of October or in November 2020 on the official BPSC website, bpsc.nic.in.

Amazon Echo Dot: The Game Changer in Gaming Devices

Amazon redesigns the Echo line with spherical speakers and swiveling screens

Amazon announced a lot of new products, including the new Echo speaker line, and the products look dramatically different. Gone are the cylinders; they’re spheres now, and look like nothing else on the market. The lights are now on the bottom, while there are still rounded buttons on the top.

The Alexa software got an upgrade, too. New capabilities allow it to become more personalized: it can now ask clarifying questions and then use that data to interact with the user later on. When you ask Alexa to set the temperature to your “favorite setting,” for example, she will now ask what that setting is. The real breakthrough, though, is conversation mode. In today’s demo, the company showed how Alexa could work when you’re ordering a pizza. One of the actors said she wasn’t that hungry and wanted a smaller pizza, and Alexa automatically changed the order for her. The team calls this “natural turn taking.”

Space technology is enabling advancement on Earth

Some of the most important, consequential, and, frankly, coolest new innovations on Earth are being driven by new technology up in space.

Satellites and the images and data they produce have long been used for crucial scientific study and vital military operations. The launch of NASA’s Nimbus program in the 1960s heralded a new age of meteorology and weather forecasting, enabling scientists to achieve a revolutionary new understanding of this planet and how humans impact it. Around the same time, satellites also became part of an expanded American defense strategy, with the CIA’s Corona satellite program providing essential intelligence during the Cold War.

With breakthroughs that have made satellites both more powerful and cheaper to launch, they are now also powering remarkable advances in cutting-edge technologies here on Earth and out in space. Among their many invaluable contributions, satellite imaging is central to rapid advances in a wide range of industries and innovations, from autonomous vehicles and 5G networks to NASA’s ambitious Mars missions.

It’s almost hard to believe, but it wasn’t all that long ago that people were unfolding big paper maps to plan road trips and figure out how to get from one place to another. Dashboard GPS technology changed driving forever, and over time, products like Google Maps and other digital navigation services have continued to improve the experience of navigating from behind the wheel.

For the most part, mapping out the Earth for these products has been an arduous task performed by special cars that are equipped with LiDAR sensors and drive around acquiring data block by block. They were a neat development when first introduced, but a fleet of cars has serious limitations — steering around paved streets means that they can only cover so much land and offer a limited amount of information, which is in turn only sporadically updated. As both the automotive industry and other tech sectors continue to advance and expand, though, far more precise and extensive maps are needed, which is where satellite imaging comes in.

Satellites that produce high definition and multispectral imagery as well as advanced geospatial analytics are driving new developments that make navigating easier, roads safer, and rides more available. GPS is enhanced by vastly more detailed and accurate maps, which are created by satellites’ ability to capture a location’s topography and its spatial context in exacting detail. The satellites are also able to cover far more territory and return to spots many times a day for much more accurate information.

Any app that requires a user’s location relies on mapping, and the more details available, the better an experience the app can offer. Rideshare services like Uber and Lyft utilize these more precise maps to help their drivers navigate and facilitate faster and easier passenger pick-ups and drop-offs; the maps expand the areas in which companies can operate, as well. Maxar Technologies, the industry leader in high definition satellite imaging, provides rideshare services with images and data to facilitate improved service and much larger innovations. (Read more: TechCrunch)

CDS Combined Defence Services Entrance Examination 2020

The Combined Defence Services Examination (CDS) is one of the best opportunities for candidates preparing for military examinations. The exam is conducted by the Union Public Service Commission (UPSC) twice a year, in February and November, to recruit officers into the Defence Forces: the Indian Army, Indian Navy, and Indian Air Force.

The 2020-21 edition of the ‘Pathfinder CDS Entrance Examination’ is a complete self-study guide designed for thorough preparation for the Combined Defence Services Examination.

1. Pathfinder CDS Entrance Examination, prescribed under UPSC guidelines.
2. The self-study guide divides the entire syllabus into 4 major sections.
3. Provides 5 previous years’ solved papers [2017-2019].
4. More than 800 MCQs for quick revision of topics.
5. Chapterwise division of previous years’ questions.
6. Questions covered in the book give deep insight into the paper pattern, question types, and weightage in the exam.

Packed with comprehensive study resources, it is an ideal book for guiding preparation for the upcoming CDS Entrance Exam and striving towards success.