Amazon hands out awards to exceptional computer science and robotics teachers
by Paul Hill
Amazon has awarded ten computer science and robotics teachers its Amazon Future Engineer Teacher of the Year Award. The award recognises teachers who bring computer science and robotics education to students from underserved and historically underrepresented communities, helping those students enter the two fields.
Discussing the award, Victor Reinoso, Global Director, Amazon Future Engineer, Amazon in the Community, said:
Ten teachers from across the United States were recognised for their work, noted below:
To be selected for an award, teachers had to meet certain criteria: promote diversity and inclusion in computer science, secure a recommendation from a school administrator, and provide compelling anecdotes from their school and students. Scholarship America then reviewed the applications and selected the ten recipients.
Aside from the award itself, Amazon provided each winner with Amazon Future Engineer merchandise and a package valued at $30,000: $25,000 to expand computer science and robotics lessons at their school, and a $5,000 cash prize for the nominated educator.
If you’re a computer science or robotics teacher, or know somebody who is, you can sign up for an email reminder to apply in the fall for the next Amazon Future Engineer Teacher of the Year Award.
Get this Build a Bundle: Learn Game Development, for free
by Steven Parker
Today's highlighted deal comes via our Online Courses section of the Neowin Deals store, where you can get this Build a Bundle: Learn Game Development for free. First-person shooter, micro-RPG, & micro-strategy — learn up-to-date skills and build real games with 3 hands-on courses from Zenva.
The fastest track to success is to learn by doing, and in this 3-course bundle you'll build real games as you follow along with the lessons, working with both the Godot and Unity engines. Not only will you boost your overall game development skills within these engines, but you'll also gain essential, fundamental knowledge for coding a variety of strategy game systems that can be expanded into larger, turn-based projects. The courses include downloadable project files and take you through the process of creating 3 different kinds of games — a first-person shooter, a micro-RPG, and a micro-strategy game. Start learning now and come away with real projects to put in your portfolio.
The bundle includes three courses:
- Build a First-Person Shooter with Godot
- Build a Micro-RPG
- Build a Micro-Strategy Game

Description
- Access 50 lectures & 5 hours of content 24/7
- Create a first-person shooter, complete with attacking enemies, pickups, & more
- Gain integral skills for manipulating objects in 3D space & for scripting various FPS mechanics with Godot’s GDScript language
- Create a top-down, 2D micro-RPG in Unity
- Learn a variety of skills necessary to create a full-fledged RPG
- Master the basics & learn transferable skills that can be applied to your larger projects
- Create a turn-based, micro-strategy game about building & managing a colony on Mars

This Build a Bundle: Learn Game Development normally costs $99, but you can pick it up for free for a limited time. For a full description, specs, and instructor info, click the link below.
Get this free bundle, or learn more about it
Disclosure: This is a StackCommerce deal or giveaway in partnership with Neowin; an account at StackCommerce is required to participate in any deals or giveaways. For a full description of StackCommerce's privacy guidelines, go here. Neowin benefits from shared revenue of each sale made through our branded deals site, and it all goes toward the running costs.
Huawei to launch HarmonyOS and new devices on June 2
by Jay Bonggolto
Huawei unveiled HarmonyOS in 2019, its homegrown operating system designed to run on various smart devices including smartphones, wearables, wireless earbuds, laptops, tablets, and self-driving cars. A year later, the company announced a version of the OS specifically built for smartphones, dubbed HarmonyOS 2.0, though it was not meant for release until sometime in 2021.
Today, the Chinese phone maker posted a new video online teasing the upcoming launch of HarmonyOS and other products on June 2. The teaser was shared on Twitter.
It's not clear whether the event will be China-only or worldwide, but it's expected to mark a new milestone in Huawei's efforts to cut its reliance on Android after U.S. sanctions barred Google from supporting its mobile devices. Huawei also didn't say whether it will launch a new smartphone in June, beyond indicating that it would unveil new products alongside HarmonyOS.
Huawei positions the new operating system as a key step in addressing the impact of U.S. sanctions that have hurt its business worldwide. Aside from the Google ban, Huawei was also blocked from accessing the critical U.S. technology it needs to manufacture its own Kirin processors.
The company's solution is to focus on its software ecosystem. Huawei's founder and CEO, Ren Zhengfei, most recently called on employees to "dare to lead the world" in software in a move to counter the impact of U.S. sanctions, according to an internal memo. He said transitioning to software and services will give the company "greater independence and autonomy" as these are beyond the reach of U.S. control.
AI won't be taking up software engineering jobs any time soon, but it's getting there
by Ather Fawaz
Last Sunday, we looked at OpenAI's latest work in which the firm trained diffusion models to generate deepfakes and subsequently achieved a new state-of-the-art in multiple image generation tasks. Today, we shift gears and focus on another big and recent development in the field of artificial intelligence—transformer models.
Transformer models came to the forefront with Google's open-source implementation of BERT. By improving on the shortcomings of RNNs and LSTMs, this deep learning architecture revolutionized the field of natural language processing and generation. We first saw the potency of such language models in OpenAI's GPT-2, with 1.5 billion parameters, which produced news, stories, lyrics, and other pieces of text that could easily be mistaken for the work of a human rather than a language model. Soon after, GPT-3, its successor, borrowed all the best bits from its predecessor and, with 175 billion parameters to back it up, produced work that sounded shockingly cohesive, sophisticated, and factually correct. Since the training dataset for this language model spanned much of the public internet, we could ask it to produce pretty much anything available in textual form online. Stories, lyrics, news pieces, and conversations aside, GPT-3 even wrote valid CSS and HTML code. The last of these, a language model's ability to write code, is what we shall be focusing on today.
A couple of days back, a team of researchers from UC Berkeley, UChicago, UIUC, and Cornell published a paper gauging how well today's best language models can write code. In the paper, titled Measuring Coding Challenge Competence with APPS, the researchers essentially put these language models in the shoes of a candidate in a programming interview, where the ability to understand a given problem and code its solution is tested. To do this, the team introduced a new dataset called the Automated Programming Progress Standard (APPS).
The dataset consists of 10,000 coding problems, split into three categories (Introductory, Interview, Competition) and written in the plain English typically found in programming interviews today. The problems were taken from open-access sites like Codewars, AtCoder, Kattis, and Codeforces, where programmers share coding problems with each other. To validate submitted solutions, the dataset contains 131,836 test cases and 232,444 ground-truth solutions written by humans in Python.
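Conceptually, judging a candidate program against APPS-style test cases means running it on each input and comparing its output to the expected answer. The following is a minimal, hypothetical sketch of such a harness (it is not the paper's actual evaluation code; the function name and structure are assumptions for illustration):

```python
import subprocess

def passes_test_cases(solution_path, test_cases):
    """Run a candidate Python solution against (stdin, expected stdout) pairs.

    Returns how many test cases the program passes. In APPS-style judging,
    a generated program only counts as correct if its output matches on
    every hidden test case, not just on the visible examples.
    """
    passed = 0
    for stdin_data, expected in test_cases:
        try:
            result = subprocess.run(
                ["python3", solution_path],
                input=stdin_data,
                capture_output=True,
                text=True,
                timeout=5,  # guard against generated code that loops forever
            )
        except subprocess.TimeoutExpired:
            continue  # a hung program fails this test case
        if result.returncode == 0 and result.stdout.strip() == expected.strip():
            passed += 1
    return passed
```

A harness like this is also why the "looks correct at first glance" failure mode matters: a syntactically valid program that misreads the problem will simply score 0 passed cases once validated.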
The following image shows an excerpt from the dataset:
With the APPS dataset prepared, the researchers trained three of the most capable language models of today: GPT-2, GPT-3, and GPT-Neo (an open alternative to the closed-source GPT-3). Once training was complete, the models were evaluated and compared against each other.
The researchers found that while there are definite positives, understanding and solving coding problems remains a notoriously challenging task for even the best language models we have today.
On the positive side, the models demonstrated the ability to understand a problem, write import statements, define classes, and form program flow. Here is a sample from GPT-2, the smallest of the three models, on a test problem for which it passed all 18/18 test cases:
And here's an example of what the GPT-3 produced for a separate problem.
Most evidently, the models sometimes produced syntax errors, though the larger models were more resilient to them, and further fine-tuning and training decreased these syntax errors exponentially. There were also times when a model's solution would pass as correct at first glance despite failing all test cases once validated.
The team suspects that 'memorization' of code blocks from the training set may be the culprit here, and the usual remedy for such problems is more trainable parameters. Overall, it is clear from these results that while language models have come a long way in conversational abilities and in creative and formal writing, their ability to code is still lackluster. But it's definitely getting there.
Moving forward, the team envisages that as language models keep growing larger and more robust, concerns about malicious code and automation may arise, and the APPS dataset proposed here could come in handy for measuring that. For now, it doesn't seem like language models have a shot at landing a decent software engineering job. More details can be found in the project's GitHub repository or in the pre-print on arXiv.
The low-level Rust programming language has just turned six years old
by Paul Hill
The developers behind the Rust programming language celebrated six years since the launch of version 1.0 on Saturday. In its fairly short life, Rust has gained a lot of interest as a replacement for C thanks to code-safety features that are on by default, which lead to fewer exploitable memory-related bugs. This memory safety has caught the imagination of many, and Stack Overflow even found Rust to be the most loved programming language in its 2020 survey.
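To illustrate the kind of safety that is "on by default", here is a small sketch of Rust's ownership and move rules, which reject use-after-move bugs at compile time rather than letting them become runtime memory errors (the commented-out line is an illustrative example of code the compiler refuses to build):

```rust
fn main() {
    // Ownership transfer: assigning a heap-owning value moves it.
    let s = String::from("hello");
    let t = s; // `s` is moved into `t`; `s` may no longer be used

    // println!("{}", s); // error[E0382]: borrow of moved value: `s`
    println!("{}", t); // fine: `t` now owns the string

    // Because `i32` is a small Copy type, reading an element copies it,
    // so mutating the Vec afterwards is allowed.
    let mut v = vec![1, 2, 3];
    let first = v[0];
    v.push(4);
    println!("first = {}, len = {}", first, v.len());
}
```

Equivalent C code with a freed pointer or an invalidated reference would compile and crash (or be exploited) at runtime; in Rust, the whole class of bug is a build failure instead.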
Earlier this year, the Rust Foundation was established to look after its namesake language following lay-offs at Mozilla which was previously maintaining the project. The creation of the Rust Foundation was one of the biggest events in the language’s six-year history and will be one of the most important going forward.
One of the most notable projects to adopt Rust to date is Mozilla’s web browser, Firefox. Since Firefox 48, Mozilla has been adding more and more Rust code to its flagship browser to improve overall speed and to increase security by eliminating classes of memory-safety bugs. As of July 2020, 12.31% of the code in Firefox Nightly on macOS was written in Rust, up from 6.24% in November 2018.
Going forward, Google is planning to move low-level Android components to Rust, with work already started on the Bluetooth stack. Meanwhile, the Linux kernel developers are weighing whether to allow kernel code to be written in Rust, enabling safer drivers and kernel-space code. If you’d like to see where Rust is heading this year on a technical level, be sure to read the project's recent blog post on the matter.