
Indian Post Office Recruitment 2020: Apply for 1,371 Posts, Including 1,029 Postman Vacancies

Post Office Recruitment 2020: Candidates interested in the vacancy details should go through the notification carefully and submit their online application only if they meet the eligibility criteria.

The Indian Postal Department’s Maharashtra circle has issued a notification for the recruitment of 1,371 posts. Of these, 1,029 posts are for Postman, 327 for Multi Tasking Staff (MTS), and the remaining 15 for Mail Guard.

Highlights
  • The notification was released on September 29, and the online application process was expected to begin on October 5.
  • The Maharashtra Postal Circle later informed that candidates would be able to submit their online applications from October 12.
  • The last date for submission of applications for these posts is November 10, till 11.59 pm.

Interested candidates can submit their application at the Maharashtra circle’s official website, https://www.indiapost.gov.in/VAS/Pages/Content/Recruitments.aspx?Category=Recruitment, on or before November 10, 2020.

The notification for the said posts was released on September 29, and the online application process was expected to begin on October 5. However, the Maharashtra Postal Circle later informed, through another notice on October 7, that candidates would be able to submit their online applications from 10 am on October 12.

The last date for submission of applications for these posts is November 10 at 11.59 pm.

Click here to find vacancy details


RML Recruitment for Assistant Professor

Walk-in for 33 Assistant Professor Posts, Salary Rs. 97,000


Dr. RML Hospital, New Delhi Job Notification: Dr Ram Manohar Lohia Hospital (RML) has invited applications for recruitment to the post of Assistant Professor. Eligible candidates can appear for a walk-in interview on 14, 15, and 16 October 2020.

Important Date:

Walk-in-interview Date: 14, 15 and 16 October 2020
Dr Ram Manohar Lohia Hospital (RML) Assistant Professor Vacancy Details:


Medicine & Emergency Medicine: 15 Posts
Anesthesia: 08 Posts
Respiratory Medicine: 03 Posts
Critical Care Medicine: 05 Posts
Obst. & Gynaecology: 01 Post
Pathology: 01 Post

Click Here to Apply online at Dr. Ram Manohar Lohia Hospital


How GPT-3 is Revolutionizing Artificial Intelligence

GPT-3 has been created by OpenAI, a research business co-founded by Elon Musk, and has been described as the most important and useful advance in AI for years.


But there’s some confusion over exactly what it does (and indeed doesn’t do), so here I will try to break it down into simple terms for any non-techy readers interested in understanding the fundamental principles behind it. I’ll also cover some of the problems it raises, as well as why some people think its significance has been somewhat overinflated by hype.

What is GPT-3?
Starting with the very basics, GPT-3 stands for Generative Pre-trained Transformer 3 – it’s the third version of the tool to be released.

In short, this means that it generates text using algorithms that are pre-trained – they’ve already been fed all of the data they need to carry out their task. Specifically, they’ve been fed around 570GB of text information gathered by crawling the internet (a publicly available dataset known as Common Crawl), along with other texts selected by OpenAI, including the text of Wikipedia.

If you ask it a question, the most useful response will be an answer. If you ask it to carry out a task such as creating a summary or writing a poem, you will get a summary or a poem.

More technically, it has also been described as the largest artificial neural network ever created – I will cover that further down.


What can GPT-3 do?
GPT-3 can create anything that has a language structure – which means it can answer questions, write essays, summarize long texts, translate languages, take memos, and even create computer code.

In fact, in one demo available online, it is shown creating an app that looks and functions similarly to the Instagram application, using a plugin for the software tool Figma, which is widely used for app design.

This is, of course, pretty revolutionary, and if it proves to be usable and useful in the long-term, it could have huge implications for the way software and apps are developed in the future.

As the code itself isn’t available to the public yet (more on that later), access is only available to selected developers through an API maintained by OpenAI. Since the API was made available in June this year, examples have emerged of poetry, prose, news reports, and creative fiction.

This article is particularly interesting: in it, you can see GPT-3 making a quite persuasive attempt at convincing us humans that it doesn’t mean any harm, although its robotic honesty means it is forced to admit that “I know that I will not be able to avoid destroying humankind” if evil people make it do so.

How does GPT-3 work?
In terms of where it fits within the general categories of AI applications, GPT-3 is a language prediction model. This means that it is an algorithmic structure designed to take one piece of language (an input) and transform it into what it predicts is the most useful following piece of language for the user.
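To make that input-to-prediction flow concrete, here is a minimal sketch of how a developer with beta access might have sent a prompt to GPT-3 using OpenAI’s Python client as it existed in 2020; the engine name, prompt, and sampling parameters are illustrative assumptions rather than details from this article.

    # Minimal sketch: send an input prompt and print GPT-3's predicted continuation.
    # Assumes the 2020-era `openai` Python package and a beta API key.
    import openai

    openai.api_key = "YOUR_API_KEY"   # granted via the beta waitlist

    response = openai.Completion.create(
        engine="davinci",             # illustrative: the largest GPT-3 engine
        prompt="Q: What is the capital of France?\nA:",
        max_tokens=16,                # how much text to generate
        temperature=0.0,              # prefer the single most likely continuation
    )

    print(response["choices"][0]["text"])   # e.g. " Paris"

Setting the temperature low makes the model favour its most probable continuation; higher values trade that predictability for variety.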

It can do this thanks to the training analysis it has carried out on the vast body of text used to “pre-train” it. Unlike other algorithms, which in their raw state have not been trained, GPT-3 comes ready-made: OpenAI has already expended the huge amount of compute resources necessary for it to understand how languages work and are structured. The compute time necessary to achieve this is said to have cost OpenAI $4.6 million.

To learn how to build language constructs, such as sentences, it employs semantic analytics – studying not just the words and their meanings, but also gathering an understanding of how the usage of words differs depending on other words also used in the text.

It’s also a form of machine learning termed unsupervised learning because the training data does not include any information on what is a “right” or “wrong” response, as is the case with supervised learning. All of the information it needs to calculate the probability that its output will be what the user needs is gathered from the training texts themselves.

This is done by studying the usage of words and sentences, then taking them apart and attempting to rebuild them itself.

For example, during training, the algorithms may encounter the phrase “the house has a red door.” It is then given the phrase again, but with a word missing – such as “the house has a red X.”

It then scans all of the text in its training data – hundreds of billions of words, arranged into meaningful language – and determines what word it should use to recreate the original phrase.


To start with, it will probably get it wrong – potentially millions of times. But eventually, it will come up with the right word. By checking its original input data, it will know it has the correct output, and “weight” is assigned to the algorithm process that provided the correct answer. This means that it gradually “learns” what methods are most likely to come up with the correct response in the future.
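As a drastically simplified, non-neural illustration of this guess-check-reweight loop, the toy sketch below simply counts which word followed each short context in a tiny “training corpus” and uses those counts as crude weights when filling in a blank; GPT-3’s real mechanism is a 175-billion-parameter transformer, so treat this purely as an analogy.

    # Toy illustration of "fill in the missing word" training, assuming nothing
    # beyond the Python standard library. Counts of what actually followed each
    # two-word context play the role of the learned weights.
    from collections import Counter, defaultdict

    corpus = ("the house has a red door . the barn has a red roof . "
              "the car has a red door .")
    tokens = corpus.split()

    weights = defaultdict(Counter)
    for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
        weights[(a, b)][c] += 1      # reward the guess that matched the original text

    def predict(context: str) -> str:
        """Fill in the blank after the last two words of `context`."""
        a, b = context.split()[-2:]
        return weights[(a, b)].most_common(1)[0][0]

    print(predict("the house has a red"))   # -> 'door'

GPT-3 does something analogous at vastly greater scale, with hundreds of billions of words of text and 175 billion continuously adjusted weights in place of simple counts.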

The scale of this dynamic “weighting” process is what makes GPT-3 the largest artificial neural network ever created. It has been pointed out that in some ways, what it does is nothing that new, as transformer models of language prediction have been around for many years. However, the number of weights the algorithm dynamically holds in its memory and uses to process each query is 175 billion – ten times more than its closest rival, produced by Nvidia. (Source: https://www.forbes.com/sites/bernardmarr/2020/10/05/what-is-gpt-3-and-why-is-it-revolutionizing-artificial-intelligence/#783916b7481a)


Microsoft Recruitment Drive | Software Engineer | BE/B.Tech

Microsoft Recruitment 2020 | Entry Level | Software Engineer | 2017 – 2019 Batch | BE/ B.Tech – Computers/ IT Engineering | Noida


Company: Microsoft India (R&D) Pvt Ltd
Microsoft GTSC was established in India in October 2003. It is part of Microsoft’s Customer Service and Support (CSS) organization, which has locations throughout the Americas, EMEA, Asia Pacific, and Greater China. Microsoft’s CSS organization supports over 170 Microsoft products, ranging from the Consumer to the Enterprise customer segments.

This includes the MSN and Home and Entertainment products as well as the more deeply technical products from Developer Support and Enterprise Platform Support to Enterprise Messaging Support and Enterprise Business Applications Support. The site in Bangalore is a part of a global network that has over 50 million customer touchpoints on an annual basis and provides services to the Consumer and Enterprise customer segments.


Positions: Software Engineer 1

Job Location: Noida

Salary: Best in Industry

Experience: 1+ Year

Qualifications: B.E/B.Tech in Computer Science or equivalent from a recognized university, with one year of experience in the concerned field.

Requirements:
  • Experience in working with distributed teams
  • Experience in working with highly complex software services at internet scale
  • 1+ years of experience in cloud and related technologies (e.g. Azure, AWS, K8s, etc.)
  • Degree in CS or equivalent experience

Responsibilities:

This is an SWE 1 role offering deep technical impact, product excellence, and the chance to be a culture role model, with opportunity for advancement. The role involves the application of computer science fundamentals, problem-solving, and a growth mindset to design and implement various systems and features for next-generation cloud infrastructure.

This role provides a significant opportunity to make an impact at massive scale, to learn and experience product development at scale, and to receive substantial mentoring and coaching, with room to grow along either the IC or managerial career path.


Google unveils its $300M News Initiative for Journalism Industry

Google today announced a multi-pronged News Initiative, which Chief Business Officer Philipp Schindler described as a way to tie together all the company’s efforts to work with the journalism industry.


Google says the News Initiative is focused on three broad goals — strengthening quality journalism, supporting sustainable business models and empowering newsrooms through technological innovation. It’s also committing to spend $300 million over the next three years on its various journalism-related projects.

At a New York City press event, Schindler told journalists and other industry attendees, “Our mission is inherently tied to your business.” He acknowledged that this might sound like “big company rhetoric.” To put it less diplomatically, news organizations might not view Google or the other big Internet platforms as allies given their dominance of the online ad business and the role they play in spreading sensationalistic or questionable stories, not to mention misinformation and hoaxes.

However, Schindler said Google has “two clear business incentives” to support high quality journalism.

First, he said Google search “by its very nature depends on the open web and depends on open access to information and that obviously depends on high quality information.” Second, he noted that Google’s DoubleClick ad business is all about splitting revenue with publishers, with $12.6 billion paid out to partners last year. “The economics are very clear: If you do not grow, we do not grow,” Schindler said. (Source: Techcrunch)


Microsoft Obtains Exclusive License for GPT-3 AI Model

Microsoft announced an agreement with OpenAI to license OpenAI’s GPT-3 deep-learning model for natural-language processing (NLP). Although Microsoft’s announcement says it has “exclusively” licensed the model, OpenAI will continue to offer access to the model via its own API.


Microsoft CTO Kevin Scott wrote about the agreement on Microsoft’s blog. The deal builds on an existing relationship between the two organizations, which includes a partnership in building a supercomputer on Microsoft’s Azure cloud platform. OpenAI recently used that supercomputer to train GPT-3, which at 175 billion parameters is one of the largest NLP deep-learning models trained to date. Scott said the licensing of GPT-3 will:

[Allow] us to leverage its technical innovations to develop and deliver advanced AI solutions for our customers, as well as create new solutions that harness the amazing power of advanced natural language generation.

GPT-3 is the third iteration of OpenAI’s Generative Pre-Trained Transformer model. The original GPT model was released in 2018 and contained 117 million parameters. For the next iteration, GPT-2, OpenAI scaled up the model more than 10x, to 1.5 billion parameters. Because the text generated by GPT-2 could often be as “credible” as text written by humans, OpenAI at first declined to release the full model, citing potential for misuse in generating “deceptive, biased, or abusive language at scale.” However, by November 2019, OpenAI had seen “no strong evidence of misuse” and decided to release the model.
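For a sense of how large each jump was, here is a quick back-of-the-envelope check using only the parameter counts quoted above.

    # Scale-up factors between successive GPT models, using the parameter
    # counts cited in this article (117 million, 1.5 billion, 175 billion).
    gpt1, gpt2, gpt3 = 117e6, 1.5e9, 175e9
    print(f"GPT-1 -> GPT-2: {gpt2 / gpt1:.1f}x")   # ~12.8x
    print(f"GPT-2 -> GPT-3: {gpt3 / gpt2:.1f}x")   # ~116.7x
    print(f"GPT-1 -> GPT-3: {gpt3 / gpt1:.0f}x")   # ~1496x

The second jump roughly matches the 116× figure quoted later in this piece.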

In July 2019, Microsoft and OpenAI announced a partnership, which included a $1 billion investment from Microsoft, to “jointly build new Azure AI supercomputing technologies.” OpenAI also agreed to run its services on Azure and to make Microsoft its “preferred partner for commercializing new AI technologies.” During its Build conference this May, Microsoft showcased the supercomputer built for OpenAI on its Azure cloud platform: “a single system with more than 285,000 CPU cores, 10,000 GPUs and 400 gigabits per second of network connectivity for each GPU server.”

GPT-3, announced earlier this year, was a 100x scale-up of GPT-2 and set new state-of-the-art results on several NLP tasks. The training dataset contained nearly half a trillion words. Training the model on the Azure supercomputer consumed “several thousand petaflop/s-days of compute” and is estimated to have cost from $4.6 million to $12 million. As with GPT-2, OpenAI has not released the trained model; however, OpenAI did release a limited-access web API for developers to make calls to the model from their apps.
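For developers not using a client library, the same kind of completion request could be issued over plain HTTPS, which is how apps in other languages reached the model; the endpoint path, engine name, and parameters below are assumptions based on the 2020 beta documentation, not details from this article.

    # Hypothetical raw-HTTP call to the limited-access beta API; endpoint path,
    # engine name, and parameters are assumptions, not documented specifics here.
    import requests

    API_KEY = "YOUR_API_KEY"          # issued to approved beta developers

    resp = requests.post(
        "https://api.openai.com/v1/engines/davinci/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": "Once upon a time", "max_tokens": 32, "temperature": 0.7},
    )
    print(resp.json()["choices"][0]["text"])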

The licensing deal with Microsoft is the latest of several recent moves by OpenAI to monetize their technology. Originally founded as a non-profit, OpenAI launched a new “hybrid of a for-profit and nonprofit” or “capped-profit” called OpenAI LP in March 2019. The goal of the new company was to “raise investment capital and attract employees with startup-like equity.” OpenAI’s API page contains an FAQ section that defends its commercial products as “one of the ways to make sure we have enough funding to succeed.” While the terms of the Microsoft license have not been disclosed, OpenAI claims that it has “no impact” on users of OpenAI’s API, who can “continue building applications…as usual.”

With the license agreement being touted as “exclusive,” and given OpenAI’s past reluctance to release their trained models, many commenters have joked that the company should change its name to “ClosedAI.” One Hacker News reader questioned the long-term commercial viability of GPT-3:


Anyone else feel like this idea of commercializing GPT-3 is bound to go nowhere as the research community figures out how to replicate the same capabilities in smaller cheaper open models within a few months or even a year?

The OpenAI API is currently in beta, with a waitlist for gaining access. The 1.5-billion-parameter GPT-2 model is available on GitHub. (Source: https://www.infoq.com/)

 


Bihar PCS Prelims 2020: How to apply for 66th BPSC Exam

Bihar Public Service Commission (BPSC) has begun the registration process for the 66th BPSC Prelims Exam 2020 on its official website, bpsc.bih.nic.in. Check how to apply, eligibility, and vacancy details here.


Bihar Public Service Commission (BPSC) has begun the registration process for the 66th Combined Civil Services Recruitment Exam (BPSC 66th Prelims 2020). The commission has released the notification for 562 posts, of which 169 are reserved for women. The last date to apply for the exam is October 20, 2020. The exam will be conducted in three phases, the first being the preliminary test scheduled for December 27, 2020. Candidates who wish to apply can do so on bpsc.bih.nic.in.

How to apply for the 66th BPSC Exam:
  • Visit the official website of Bihar PSC, bpsc.bih.nic.in
  • Go to the link that reads “Application form for BPSC CSE 2020”
  • Register yourself using your email ID and password
  • Using the registration number, fill in the application form (Direct Link)


Artificial Intelligence: New Algorithm Replaces Writers, Journalists, and Poets

GPT-3 Creative Fiction

Creative writing by OpenAI’s GPT-3 model, demonstrating poetry, dialogue, puns, literary parodies, and storytelling. Plus advice on effective GPT-3 prompt programming & avoiding common errors.


The latest and greatest neural network for unrestricted natural language generation is OpenAI’s GPT-3. GPT-3 is like GPT-1 and the GPT-2 I’ve used extensively before—only much more so, and then going beyond them in a fascinating new way.

GPT-3’s samples are not just close to human level: they are creative, witty, deep, meta, and often beautiful. They demonstrate an ability to handle abstractions, like style parodies.

Scaling works: quantity is a quality all its own. The scaling of GPT-2-1.5b by 116× to GPT-3-175b has worked surprisingly well and unlocked remarkable flexibility in the form of meta-learning, where GPT-3 can infer new patterns or tasks and follow instructions purely from text fed into it.

What can we do with GPT-3? Here, we’re all about having fun while probing GPT-3’s abilities for creative writing tasks, primarily (but far from limited to) poetry. Fortunately, OpenAI granted me access to their Beta API service, which provides a hosted GPT-3 model, letting me spend a great deal of time interacting with GPT-3 and writing things. Naturally, I’d like to write poetry with it: but GPT-3 is too big to finetune like I did GPT-2, and OA doesn’t (yet) support any kind of training through their API. Must we content ourselves with mediocre generic poetry, at best, deprived of finetuning directly on chosen poetry corpuses or authors we might like to parody? How much does GPT-3 improve, and what can it do?


Turns out: a lot! Below, I walk through first impressions of using GPT-3, and countless samples. In the latest twist on Moravec’s paradox, GPT-3 still struggles with commonsense reasoning & factual knowledge of the sort a human finds effortless after childhood, but handles well things like satire & fiction writing & poetry, which we humans find so difficult & impressive even as adults. In addition to the Cyberiad, I’d personally highlight the Navy Seal & Harry Potter parodies, the Devil’s Dictionary of Science/​Academia, “Uber Poem”, “The Universe Is a Glitch” poem (with AI-generated rock music version), & “Where the Sidewalk Ends”.

What Benchmarks Miss

The GPT-3 paper includes evaluation of zero-shot/few-shot performance across a wide range of tasks, but I fear that unless one is familiar with the (deadly dull) benchmarks in question, it won’t be impressive. You can skip to the appendix for more examples, like its poems, or browse the random samples.

The original OpenAI Beta API homepage includes many striking examples of GPT-3 capabilities ranging from chatbots to question-based Wikipedia search to legal discovery to homework grading to translation; I’d highlight AI Dungeon’s Dragon model (example), and “Spreadsheets”/“Natural Language Shell”/“Code Completion”. Andrew Mayne describes using GPT-3 to generate book recommendation lists, read interactive stories, engage in conversations with historical figures like Ada Lovelace, summarize texts (such as for elementary school children, also available as a service now, Simplify.so) or summarize movies in emoji (Matrix: “🤖🤐”; Hunger Games: “🏹🥊🌽🏆”), convert screenplay ↔︎ story, summarize/write emails, and rewrite HTML. Paras Chopra finds that GPT-3 knows enough Wikipedia & other URLs that the basic Q&A behavior can be augmented to include a ‘source’ URL, and so one can make a knowledge base ‘search engine’ with clickable links for any assertion (i.e. the user can type in “What year was Richard Dawkins’ The Selfish Gene published?” and GPT-3 will return a tuple like (“The Selfish Gene was published in 1976”, “https://en.wikipedia.org/wiki/The_Selfish_Gene”), which can be parsed & presented as a search engine). Hendrycks et al 2020 tests few-shot GPT-3 on common moral reasoning problems, and while it doesn’t do nearly as well as a finetuned ALBERT overall, interestingly, its performance degrades the least on the problems constructed to be hardest.
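As a hedged sketch of how such a knowledge-base “search engine” might be prompt-programmed, the snippet below builds a few-shot prompt in the (answer, source URL) format described above and parses the model’s continuation; the example questions, URLs, and the call_gpt3() helper are hypothetical illustrations, not code from any of the projects mentioned.

    # Sketch of the "prompt programming" behind a Q&A-with-source search engine.
    # The few-shot examples and call_gpt3() helper are hypothetical stand-ins.
    import ast

    FEW_SHOT = (
        'Q: Who wrote On the Origin of Species?\n'
        'A: ("On the Origin of Species was written by Charles Darwin", '
        '"https://en.wikipedia.org/wiki/On_the_Origin_of_Species")\n'
        "Q: What year was Richard Dawkins' The Selfish Gene published?\n"
        'A: ("The Selfish Gene was published in 1976", '
        '"https://en.wikipedia.org/wiki/The_Selfish_Gene")\n'
    )

    def build_prompt(question: str) -> str:
        # Two worked examples teach the pattern; the model is then expected to
        # continue it for the new question with an (answer, source URL) tuple.
        return FEW_SHOT + f"Q: {question}\nA:"

    def parse_answer(completion: str) -> tuple:
        """Parse the model's '("answer", "url")' continuation into two fields."""
        first_line = completion.strip().splitlines()[0]
        return ast.literal_eval(first_line)

    # Hypothetical usage (call_gpt3 would wrap an API completion request):
    # answer, url = parse_answer(call_gpt3(build_prompt("Who proposed general relativity?")))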


Ryan North experimented with Crunchyroll anime, Star Trek: The Next Generation, & Seinfeld plot summaries. Max Woolf has a repo of GPT-3 example prompts & various completions such as the original GPT-2 “unicorn” article, Revenge of the Sith, Stack Overflow Python questions, and his own tweets (note that many samples are bad because the prompts & hyperparameters are often deliberately bad, eg the temperature=0 samples, to demonstrate the large effect of poorly-chosen settings as a warning). Janelle Shane experimented with weird dog descriptions to accompany deformed GAN-dog samples, and 10,000-year nuclear waste warnings based on the famous 1993 Sandia report on long-time nuclear waste warning messages for the Waste Isolation Pilot Plant. Summers-Stay tried imitating Neil Gaiman & Terry Pratchett short stories with excellent results. Arram Sabeti has done “songs, stories, press releases, guitar tabs, interviews, essays, and technical manuals”, with his Elon Musk Dr. Seuss poems a particular highlight. Paul Bellow (LitRPG) experiments with RPG backstory generation. Merzmensch Kosmopol enjoyed generating love letters written by a toaster. James Yu co-wrote a SF Singularity short story with GPT-3, featuring regular meta sidenotes where he & GPT-3 debate the story in-character. Daniel Bigham plays what he dubs “19 degrees of Kevin Bacon” which links Mongolia to (eventually) Kevin Bacon. Alexander Reben prompted for contemporary art/sculpture descriptions, and physically created some of the ones he liked best using a variety of mediums like matchsticks, toilet plungers, keys, collage, etc.


Harley Turan found that, somehow, GPT-3 can associate plausible color hex codes with specific emoji. Even more perplexingly, Sharif Shameem discovered that GPT-3 could write JSX (a JavaScript syntax extension for writing HTML-like markup) according to a specification like “5 buttons, each with a random color and number between 1–10”, or increase/decrease a balance in React, or a very simple to-do list, and it would often work, or require relatively minor fixes. GPT-3 can also write some simple SVG shapes or SVG/Chart.js bar graphs, and do text→LaTeX and SQL queries. While I don’t think programmers need worry about unemployment (NNs will be a complement until they are so good they are a substitute), the code demos are impressive in illustrating just how diverse the skills created by pretraining on the Internet can be. (Source: https://www.gwern.net/)