Block coding for all modern LEGO® hubs

Endless creativity and fun with smart LEGO® bricks using Pybricks

November, 2023 – Pybricks Headquarters: Today, the Pybricks team presents the first beta release of block coding for all modern LEGO® hubs. For the first time, fans of all LEGO themes can bring their smart bricks together in a single app for endless possibilities and creativity.

Whether you want to make smart train layouts, autonomous Technic machines, interactive BOOST creatures, or super-precise SPIKE and MINDSTORMS robots, you can do it with Pybricks.

Pybricks is beginner-friendly and easy to use. There’s no need to install complicated apps or libraries either. Just go to https://beta.pybricks.com, update the firmware, and start coding.

And now for the first time, no prior Python coding experience is required. You can code with familiar but powerful blocks, and gradually switch to Python when you’re ready. The live preview makes it easy to see how your blocks translate to Python code.
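To give a sense of how blocks map to Python, here is a minimal Pybricks-style program; the hub type, port, and values are illustrative assumptions rather than output copied from the app's live preview:

    from pybricks.hubs import PrimeHub
    from pybricks.pupdevices import Motor
    from pybricks.parameters import Port
    from pybricks.tools import wait

    hub = PrimeHub()            # the hub the program runs on
    motor = Motor(Port.A)       # a motor plugged into port A

    hub.speaker.beep()          # short beep, like a "play sound" block
    motor.run_angle(500, 360)   # run at 500 deg/s for one full rotation
    wait(1000)                  # pause one second before the program ends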

Meanwhile, more seasoned builders and robotics teams will enjoy advanced features such as color sensor calibration and built-in gyro control for drive bases.
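For example, the built-in gyro control for drive bases is exposed in Python roughly as sketched below; the ports, wheel diameter, and axle track are assumptions for a typical two-motor robot and should be measured on your own build:

    from pybricks.pupdevices import Motor
    from pybricks.parameters import Direction, Port
    from pybricks.robotics import DriveBase

    left = Motor(Port.A, Direction.COUNTERCLOCKWISE)
    right = Motor(Port.B)

    # Wheel diameter and axle track are in millimeters.
    robot = DriveBase(left, right, wheel_diameter=56, axle_track=114)
    robot.use_gyro(True)   # use the hub's built-in gyro to keep headings accurate

    robot.straight(500)    # drive 500 mm in a straight line
    robot.turn(90)         # turn 90 degrees in place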

The new block coding experience is exclusively available to our supporters on Patreon. You can sign up for a monthly subscription or make a one-time pledge in our shop for lifetime access.

Python coding remains entirely free and open source, and continues to be supported by a community of developers and LEGO enthusiasts around the world. Improvements are made almost every day, with the lead developers actively engaging with the community for ideas, bug fixes, and brand new features.

So grab your LEGO sets and start coding!

ABB is the first manufacturer to provide intuitive, block-based no-code programming for all cobots and six-axis industrial robots

  • First-time users can program their collaborative robots and industrial robots for free within minutes
  • System integrators and experienced users can develop, share, and customize sophisticated programs for application-specific features

ABB Robotics has expanded the scope of its free Wizard Easy Programming software for collaborative robots to include all six-axis industrial robots running on an ABB OmniCore™ controller, making ABB the first robot manufacturer to offer an easy-to-use, no-code programming tool for both cobots and six-axis industrial robots. The move lowers the barriers to automation for early adopters and gives ecosystem partners and integrators an efficient tool to support their customers.

"If we want to promote and advance the use of robotic automation on a global scale, we need to address the challenges and opportunities of the industry," says Marc Segura, head of the robotics division at ABB. "By adding our six-axis industrial robots to Wizard Easy Programming, ABB Robotics is responding to the skills shortage and increasing demand from manufacturing companies for simple and easy-to-use programming software for their robot fleets."

Create robot applications without prior training

Wizard Easy Programming uses a graphic, drag-and-drop, no-code programming approach designed to simplify the development of robotic applications. The software allows both first-time and experienced robot users to create applications in minutes – a task that typically requires a week of training and another week of development work. Since its launch in 2020, Wizard Easy Programming has been used in a wide range of applications in conjunction with ABB’s YuMi, SWIFTI™ and GoFa™ collaborative robots.

Wizard Easy Programming, previously available for ABB’s collaborative robots, is now available for all of the company’s six-axis industrial robots. (Image: ABB)

The software lets users create complete programs for applications such as arc welding or machine tending without prior training. An intuitive graphical user interface allows users to customize existing programs and pre-programmed blocks to control various actions – from robot movements to signal instructions and force control – for added flexibility.

Efficiently generate application-specific code

Wizard Easy Programming also includes Skill Creator, a tool that helps system integrators and experts create custom, application-specific wizard blocks for their customers. Skill Creator simplifies the creation of new blocks for highly specific tasks such as machine tending and welding, as well as for demanding applications such as medical testing. Ecosystem partners who develop accessories such as grippers, feeding systems and cameras gain access to a digital tool that allows them to share product-specific functionality regardless of the type of robot being used.

Wizard Easy Programming is pre-installed on all cobots and new six-axis industrial robots running ABB’s OmniCore controller. The leading robot controllers of the OmniCore family are characterized by an energy saving potential of 20 percent on average and a high degree of future-proofing – thanks to integrated digital connectivity and over 1,000 scalable functions.

More information about Wizard Easy Programming is available here.

Variobot VariAnt: The Robot Ant

Robots are becoming an increasingly common sight in our modern environment, and they are progressing rapidly in terms of both their capabilities and their potential uses; self-driving cars and drones are just two examples. The VariAnt, a robot created by Variobot, is another remarkable example.

VariAnt: At First Glance

VariAnt, a robot ant, moves and behaves almost exactly like its biological model. It independently explores its environment, using a sensor system to detect obstacles or markers. The programmable Variobot kit is suited to researchers who are passionate and young at heart.

Advanced Autonomy

Like most living creatures, the variAnt adapts to its surroundings by detecting relative brightness. This is made possible by a network of patented sensors. The autonomous robot ant has light sensors attached to its body, legs, antennae, and jaw claws that can be positioned as needed.

An Arduino-compatible Nano board serves as the ant robot's central processing unit (CPU). The compact control unit provides connections for two motors, 12 analog sensors, 8 freely usable digital I/Os, 2 programmable buttons, 2 reed switches for step counting, and 15 status LEDs that can be plugged in and switched as needed.

The state of the sensors, motors, and reed switches may all be indicated by the LEDs. Inside the ant’s head is a tiny circuit board that is equipped with plug-in ports, which enables the flexible combination and extension of environmental sensors.

The lithium-ion battery that comes standard with the variAnt has a run time of around 3 hours and can be recharged using the supplied USB cable.

The Walking Mechanism

The robot ant uses its sensors to detect objects, lines, light sources, or shadows in its surroundings and then deliberately follows or avoids them.

The walking mechanism developed and patented by Variobot is designed to mimic the natural gait of an ant as closely as possible. It is achieved with just 24 different acrylic components.

VariAnt: Best for

The robot ant is also an engaging and entertaining toy for people of all ages. With this kit you can build your own robot that behaves, moves, and looks like a real, though much bigger, ant. Its distinctive movements and behaviors make the robot fascinating to watch, and thanks to its size it can be used in a variety of scenarios. The variAnt kit costs around €199.

Conclusion

The VariAnt might revolutionize robotics and our understanding of nature. Since it mimics ants, the VariAnt can perform many tasks that conventional robots cannot. Whether employed for research, environmental monitoring, or as a toy, the VariAnt is a groundbreaking robotics innovation that will captivate people worldwide.

Build Your Own Voice Assistant with CircuitMess Spencer: Your Talkative Friend

Voice assistants have become a crucial component of our everyday lives in today's technologically sophisticated society. They assist us with work, respond to our inquiries, and even provide entertainment. Have you ever wondered how voice assistants work, or how to build your own? Stop searching: Spencer is here to satisfy your curiosity and provide a fun DIY activity. This blog post will introduce you to Spencer, a voice assistant that will brighten your day with jokes and provide you with all the information you need.

Meet Spencer

Spencer is more than simply a voice assistant; it is a buddy that converses with you. It hears you well enough to understand everything you say, and it uses its large red button as a trigger to search the internet and give you straightforward answers. Spencer's endearing personality and ability to make you grin make it a wonderful addition to your everyday routine.

Spencer’s Features: Your Interactive Voice Assistant Companion

1. Voice Interaction

High-quality audio communication is possible because of Spencer’s microphone. It comprehends your instructions, inquiries, and chats and offers a simple and straightforward approach for you to communicate with your voice assistant. Simply talk to Spencer, and it will answer as you would expect, giving the impression that you are conversing with a genuine friend.

2. Internet Connectivity and Information Retrieval

Spencer has internet access, giving you access to a huge base of information. By pushing the big red button on its chest, you can have Spencer do a real-time internet search. Whether you need the answer to a trivia question, the latest news headlines, or information on a particular topic, Spencer can search the web and give you clear, succinct answers.

3. Personalization and Customization

Spencer is all about being wholly original. You can alter its features and responses to fit your tastes. Make Spencer reflect your style and personality by changing its external elements, such as colors or decals, or by adding accessories. To create a truly customized experience, you can also tailor its responses, jokes, and interactions to your sense of humor and personal preferences.

4. Entertainment and Engagement

Spencer is aware of how important laughing is to life. It has built-in jokes and amusing replies, so talking to your voice assistant is not only educational but also interesting and fun. Spencer’s amusing features will keep you entertained and involved whether you need a quick pick-me-up or want to have a good time with friends and family.

5. Learning and Educational STEM Experience

In particular, STEM (science, technology, engineering, and mathematics) subjects are the focus of Spencer’s educational mission. You will learn useful skills in electronics, soldering, component assembly, and circuits by making Spencer. To further develop Spencer’s talents, you may go into programming, gaining practical experience with coding and computational thinking.

6. Inspiration and Creativity

Spencer acts as a springboard to spark your imagination and motivate further investigation. You may let your creativity run wild as you put together and customize your voice assistant. This do-it-yourself project promotes critical thinking, problem-solving, and invention, developing a creative and innovative mentality that may go beyond the context of making Spencer.

Recommended Age Group

Spencer is intended for builders who are at least 11 years old. While most of the assembly steps are simple, some, like soldering and tightening fasteners, call for caution. Don't be afraid to ask an adult for help if you need it; when using certain tools and techniques, it is usually better to be guided.

Assembly Time Required

On average, building Spencer should take about 4 hours to finish. However, keep in mind that the timeframe may vary depending on your prior knowledge and experience. Don't worry if you're unfamiliar with electronics! Enjoy the process, take your time, and don't let any early difficulties get you down. You'll become more comfortable with the procedures as you go along.

Skills Required

No special skills are needed to start this DIY project; the key goals are having fun and learning something new. Building Spencer will introduce you to the field of electronics, pique your interest in STEM fields, and give you the chance to get hands-on experience. Consider this project the first step toward a rewarding engineering career.

Pros and Cons of Spencer

Pros of Spencer

  • Spencer provides an engaging and interactive experience, responding to voice commands and engaging in conversations to make you feel like you have a real companion.
  • With internet connectivity, Spencer can retrieve information in real-time, giving you quick answers to your questions and saving you time.
  • Spencer can be customized to reflect your style and preferences, allowing you to personalize its appearance, responses, and interactions.
  • Spencer comes with built-in jokes and entertaining responses, adding fun and amusement to your interactions with the voice assistant.
  • Building Spencer provides hands-on learning in electronics, soldering, circuitry, and programming, offering a valuable educational experience in STEM disciplines.

Cons of Spencer

  • The assembly process of Spencer may involve technical aspects such as soldering and component assembly, which can be challenging for beginners or individuals with limited experience.
  • Spencer heavily relies on internet connectivity to provide real-time answers and retrieve information, which means it may have limited functionality in areas with poor or no internet connection.
  • While Spencer offers basic voice assistant features, its capabilities may be more limited compared to advanced commercially available voice assistant devices.

Conclusion

Creating your own voice assistant with Spencer is a fascinating and worthwhile endeavor. You'll learn useful skills, expand your understanding of electronics, and enjoy the thrill of putting a complicated gadget together as you work through the assembly process. Remember that the purpose of this project is to experience the joy of learning, solving problems, and letting your imagination run free, as well as to produce a final product. So get ready to join Spencer on this journey and discover a world of opportunities in the exciting world of voice assistants.

Get your own Spencer Building kit here: bit.ly/RobotsBlog

Web-based VEXcode EXP

VEXcode EXP is now available in a web-based version for Chrome browsers. The web-based version can be reached by navigating to codeexp.vex.com and contains all of the features and functionality of VEXcode EXP, but without the need to download or install anything! The new web-based version of VEXcode makes it easier for teachers and students to access projects from anywhere, at any time, on any device – including Chromebooks!

In addition to the built-in Help and Tutorials, the STEM Library contains additional resources and support for using web-based VEXcode EXP. Within the STEM Library you can find device-specific articles for connecting to web-based VEXcode EXP, loading and saving projects, updating firmware, and more. View the VEXcode EXP section of the STEM Library to learn more.

Web-based versions of VEXcode IQ and VEXcode V5 are in the works and will be available soon.

LEGO Mindstorms Robot Inventor/SPIKE Prime (51515/45678): Adapter Board for the Ultrasonic Sensor

Guest post by brickobotik:

The SPIKE™ Prime from LEGO Education has now been on the market for over a year. We presented it to you in detail in our big review. The Inventor 51515, the home version of the SPIKE™ Prime, is now available as well, and for both the software has reached an adequate level. We have also published our e-book on the SPIKE™ Prime Classroom software, which is definitely worth a look for anyone who still has questions about programming the robot.

For us at brickobotik, however, work with the SPIKE™ Prime continues. Partly, of course, in the workshops and training courses we run on this robot, but the electronics of the SPIKE™ keep us busy too. In this article we therefore give you a small glimpse into our "brickobotik tinkering workshop" and present a project we are currently working on.


Many of you will have noticed that, unlike the other sensors, the ultrasonic sensor of the SPIKE™ Prime and Mindstorms Inventor has two Torx screws on its back. If you unscrew them, you can remove the white sensor unit of the ultrasonic sensor and are left holding only the black shell. Inside it, the cable from the LEGO Powered Up connector arrives and is routed to a female header.

This female header (an 8-pin female header) is very small, with a 1.27 mm pitch, so using it with standard Arduino jumper wires can get quite fiddly. We therefore developed a matching adapter board that translates the small header to the usual 2.54 mm pitch familiar from Arduinos, breadboards, perfboards, and so on.

Technical details of the board

The Power Functions 2.0 (Powered Up) connection carries six contacts:

  • 1x 3.3 V power supply
  • 1x GND
  • 2x digital inputs/outputs (GPIO), which can also be used for UART (115200 baud, 8N1).
    Caution! The GPIOs do not supply enough current to drive LEDs directly! A transistor circuit is needed to power an LED from the 3.3 V supply.
  • 2x PWM for motors
    Caution! The voltage of these signals comes straight from the SPIKE™ Prime battery! According to our measurements it lies between 8.4 V and 6.3 V.

For the GPIO contacts, the board provides one resistor each, which offers minimal protection against incorrect GPIO configurations. They can also simply be bridged.

The same contacts are broken out again to the left and right: on one side of the board the GPIO contacts with the power supply, on the other side the PWM contacts with the power supply – both at the 2.54 mm pitch and at the 2.00 mm pitch used by the Grove connector system. For the contacts on the left and right, the 3.3 V supply is interrupted by an open solder bridge so that, for example when a Calliope mini is used, the independent power supplies of the two devices do not compete destructively. The open solder bridge can be closed with a little solder if needed.

New possibilities with the board

With the board it is much easier to connect additional sensors or motors and use them with the SPIKE™ Prime. A connection to microcontrollers such as the Calliope mini is also possible. There is one important caveat, however: such projects are better suited to advanced users. Both the wiring and the programming require experience with electronics and the relevant sensor protocols.

Technical details on controlling the contacts

Driving the contacts directly works via the SPIKE™ Prime app, but only in Python projects and at your own risk. There is no LEGO-supplied "UltrasonicBreakout" Python module or anything similar, although descriptions and guides for the relevant MicroPython classes and methods circulate on the internet. Anyone with experience with other MicroPython devices, especially with using the MicroPython REPL, can get up to speed here quickly.
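Because LEGO's stock firmware classes are undocumented, any example is necessarily a sketch. One documented alternative is the Pybricks firmware covered at the top of this post, whose UARTDevice class exposes raw serial I/O on a hub port; a minimal sketch along those lines, with the port, baud rate, and payload chosen purely for illustration:

    from pybricks.iodevices import UARTDevice
    from pybricks.parameters import Port

    # 115200 baud, 8N1 matches the UART-capable GPIO pins on the breakout.
    # Port A and the payload below are illustrative assumptions.
    uart = UARTDevice(Port.A, baudrate=115200)

    uart.write(b"ping\n")   # send a few bytes to the attached device
    reply = uart.read(4)    # wait for 4 bytes in response
    print(reply)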

Order your own adapter board!

At brickobotik we will keep working with the board to test connections to various sensors. We also want to give all makers who now feel like experimenting with connections to the SPIKE™ Prime the chance to use our adapter board for it. So if you are interested in the board described here and would like to purchase it through us, send us an e-mail at [email protected]. We will collect the enquiries and, once enough interested people have come together, let you know by e-mail as soon as the board can be pre-ordered. You don't want to tinker yourself but are interested in a particular sensor that could be connected to the SPIKE™ Prime? Then visit us at www.brickobotik.de and leave a comment or a message with your wishes. We will try to take them into account in future projects.

Wandelbots – No-Code Robotics – Short Interview

Sebastian from Robots-Blog was able to do a short interview with Annelie Harz from Wandelbots. Learn in the interview what Wandelbots is and why programming might soon become obsolete.

Robots-Blog: Who are you and what is your job at Wandelbots?

Annelie: My name is Annelie and I work as a marketing manager at Wandelbots.

Robots Blog: Which robot from science, movies or TV is your favorite?

Annelie: Wall-E, actually. A little robot that does good things and is just adorable.

Robots Blog: What is Wandelbots and where does the name come from?

Annelie: The name describes the CHANGE (German: "Wandel") of roBOTics. Because that is exactly what we do. We enable everyone to handle robots, something that today is reserved for only a small circle of experts. Our long-term company vision is: "Every robot in every company and every home runs on Wandelbots." And that promises big change on a wide variety of levels – starting, for us, with industry.

Robots Blog: Who is your product aimed at and what do I need for it?

Annelie: Our product is currently aimed at customers from industry. Here, our software – Wandelbots Teaching – can help with programming various applications such as welding or gluing without having to write a line of code. It is designed to be so simple and intuitive that really anyone can work with it to teach a robot a desired result. This works through the interaction of an app and an input device, the TracePen. This takes the form of a large pen with which users can draw a desired path for the robot on the component. But we also work together with educational institutions. They are the ones who train the next generation of robot experts. And in the long term, we are convinced – and this is already part of our vision – that robots will also find their way into private life as little helpers.

Robots-Blog: What feature is particularly worth mentioning?/What can’t anyone else do?

Annelie: Our product works independently of the robot manufacturer. In robotics, each manufacturer has developed its own proprietary programming language over the years, which makes communication between humans and machines very difficult. We, on the other hand, want to create a tool that allows any human to work with any robot – completely independent of programming language and manufacturer. Robotics should be fun for the user of our product. Thanks to the high usability and the operation of our app via iPad, this is already possible today. And over time, application-specific editions will be added to our platform – currently, for example, we are working on an app version for robot welding.

Robots Blog: Do I still need to learn programming at all?

Annelie: No. As I just explained, with this so-called no-code technology, you don’t need to learn programming anymore. It is simple, intuitive and user-friendly, even for laymen. Of course, you always need to have some basic understanding of robotics, especially for safety reasons. You should never underestimate the dangers posed by robots, which is why our product always works according to the respective manufacturer-specific safety specifications.

Robots Blog: What robots are supported? I have a Rotrics DexArm and an igus Robolink DP-5; can I use those as well?

Annelie: Of course, shortly after entering the market, we first want to make robotics in the industry, for example the automotive sector, more flexible and easier. To do this, we are gradually integrating the largest robot brands into our platform. We will certainly also integrate smaller robot brands that cover one or more niches. Or – even better – thanks to our Robot Integration Software Development Kit, robot manufacturers will soon be able to do it themselves.

Robots Blog: How much does your product cost?

Annelie: Our product is offered via a licensing model as a subscription, as is common in the Software as a Service business, or also classically for purchase. The current prices for the different editions can be found on our website (and you will certainly find more exciting content there).

Certification as a professional in image processing by Eye Vision Technology


Image processing is a complex and very extensive topic. In order to make optimal use of the multitude of different application possibilities and functions, EVT has been offering training courses on various image-processing topics for several years. Participants learn how to use the innovative EyeVision software correctly, as well as its numerous functions and possible applications.

In addition to free knowledge sharing, EVT now also offers its first free certification program. Webinar participants can take part and benefit from its advantages. After successfully completing a test that can be taken independent of time and location, participants receive a certificate and may carry the title "certified Eye Vision Technology professional in image processing". Certification comes with numerous advantages, such as a 10 percent discount on every order placed with EVT, prioritized support via an exclusive contact point, and a listing as a certified professional in image processing on the highly frequented Eye Vision homepage.

Certification benefits not only companies but also their customers, because the certificate provides transparency about the knowledge of the person responsible for image processing and the use of the image-processing software.

You can find out more about the criteria and registration for the free certification program at www.evt-web.com.

Boston Dynamics expands Spot® product line

NEW SELF-CHARGING ENTERPRISE ROBOT, REMOTE OPERATION SOFTWARE, AND ROBOT ARM ENHANCE SPOT’S CAPABILITIES FOR AUTONOMOUS SITE MONITORING


Waltham, MA – February 2, 2021 – Boston Dynamics, the global leader in mobile robotics, today announced an expanded product line for its agile mobile robot Spot. The new products include a self-charging Enterprise Spot, web-based remote operations software, Scout, and the Spot Arm. These additions extend Spot’s ability to perform autonomous, remote inspections and data collection, and enable the robot to perform manual tasks.

With more than 400 Spots out in the world, the robot has successfully undertaken hazardous tasks in a variety of inhospitable environments such as nuclear plants, offshore oil fields, construction sites, and mines. Customers have leveraged Spot’s advanced mobility, autonomy, control, and customizability to improve operational efficiency, enhance worker safety, and gather critical data. Spot’s new products are designed to enable customers to fully operationalize continuous, autonomous data collection on remote or hazardous worksites of any size, from anywhere they have access to their network.

Autonomy is critical to enhancing Spot’s value. In order to support long, remote deployments, Boston Dynamics is introducing Spot Enterprise, a new version of Spot that comes equipped with self-charging capabilities and a dock, allowing it to perform longer inspection tasks and data collection missions with little to no human interaction. In addition to the basic capabilities that the base Spot robot offers, Spot Enterprise leverages upgraded hardware for improved safety, communications, and behavior in remote environments. These upgrades expand the range that autonomous missions can cover, extend WiFi support, add flexibility to Spot’s payload ports, and enable users to quickly offload large data sets collected during the robot’s mission.

Pivotal to refining Spot's value at scale is remote operation. Scout is Boston Dynamics' web-based software that enables operators to control their fleet of Spots from a virtual control room. Operators can use Scout to take Spot anywhere a person could go on-site, allowing them to inspect critical equipment or hazardous areas from afar. The software is designed with a simple user interface for running pre-programmed autonomous missions or manually controlling the robot to perform various tasks, such as walking, or posing the robot to capture images and thermal data of obscured gauges or pipes using the Spot CAM+IR thermal imaging payload.

Combined, the Spot Enterprise robot equipped with a Spot CAM+IR thermal imaging payload, Scout software, and Boston Dynamics’ premium support now create an out-of-the-box solution for asset-intensive environments. Operators can deploy this solution on site to proactively maintain and manage assets while maximizing worker uptime and improving worker safety.

In addition to launching products designed to make remote inspection safer and easier, Boston Dynamics is also releasing the Spot Arm, which enables users to act on data insights and perform physical work in human-centric environments. The arm is equipped to operate through both semi-autonomous actions and telemanipulation. It can manually or semi-autonomously grasp, lift, carry, place, and drag a wide variety of objects. It is also capable of manipulating objects with constrained movement and can open and close valves, pull levers and turn handles and knobs in coordination with its body to open standard push and pull doors.
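The release focuses on Scout's point-and-click remote operation, but Spot can also be driven programmatically through Boston Dynamics' publicly available Python SDK (bosdyn-client). A minimal, hedged sketch of connecting to a robot and commanding it to stand, with the hostname and credentials as placeholders:

    import bosdyn.client
    from bosdyn.client.lease import LeaseClient, LeaseKeepAlive
    from bosdyn.client.robot_command import RobotCommandClient, blocking_stand

    # Hostname and credentials below are placeholders for a real robot.
    sdk = bosdyn.client.create_standard_sdk("SpotStandExample")
    robot = sdk.create_robot("192.168.80.3")
    robot.authenticate("user", "password")
    robot.time_sync.wait_for_sync()

    lease_client = robot.ensure_client(LeaseClient.default_service_name)
    lease_client.take()                   # take the lease so we may command the robot
    with LeaseKeepAlive(lease_client):
        robot.power_on(timeout_sec=20)    # power on the motors
        command_client = robot.ensure_client(RobotCommandClient.default_service_name)
        blocking_stand(command_client, timeout_sec=10)  # stand up and wait until standing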

“Since first launching Spot, we have worked closely with our customers to identify how the robot could best support their mission critical applications,” said Robert Playter, CEO of Boston Dynamics. “Our customers want reliable data collection in remote, hazardous, and dynamic worksites. We developed the new Spot products with these needs in mind, and with the goal of making it easy to regularly and remotely perform critical inspections, improving safety and operations.”

Interested parties can purchase Spot Enterprise, Scout, and the Spot Arm via Boston Dynamics’ sales team. For more information on these new offerings, please visit: www.bostondynamics.com.



About Boston Dynamics

Boston Dynamics is the global leader in developing and deploying highly mobile robots capable of tackling the toughest robotics challenges. Our core mission is to lead the creation and delivery of robots with advanced mobility, dexterity and intelligence that add value in unstructured or hard-to-traverse spaces and positively impact society. We create high-performance robots equipped with perception, navigation and intelligence by combining the principles of dynamic control and balance with sophisticated mechanical designs, cutting-edge electronics and next-generation software. We have three mobile robots in our portfolio – Spot®, Handle™ and Atlas® – as well as Pick™, a computer vision-based robotics solution for logistics. Founded in 1992, Boston Dynamics spun out of the MIT Leg Lab and is one of Inc. Magazine's Best Workplaces of 2020. For more information on our company and its technologies, please visit www.bostondynamics.com.

Blaize Delivers First Open and Code-free AI Software Platform Spanning the Entire Edge AI Application Lifecycle


EL DORADO HILLS, CA — December 2020 — Blaize today fully unveiled the Blaize AI Studio offering, the industry's first open and code-free software platform to span the complete edge AI operational workflow from idea to development, deployment, and management. AI Studio dramatically reduces edge AI application deployment complexity, time, and cost by breaking down the barriers within existing application development and machine learning operations (MLOps) infrastructure that hinder edge AI deployments. By eliminating the complexities of integrating disparate tools and workflows, and by introducing multiple ease-of-use and intelligence features, AI Studio reduces the time required to go from models to deployed production applications from months to days.



“While AI applications are migrating to the Edge with growth projected to outpace that of the Data Center, Edge AI deployments today are complicated by a lack of tools for application development and MLOps,” says Dinakar Munagala, Co-founder and CEO, Blaize. “AI Studio was born of the insights to this problem gained in our earliest POC edge AI hardware customer engagements, as we recognized the need and opportunity for a new class of AI software platform to address the complete end-to-end edge AI operational workflow.”



“AI Studio is open and highly optimized for the AI development landscape that exists across heterogeneous ecosystems at the edge,” says Dmitry Zakharchenko, VP Research & Development, Blaize. “With the AI automation benefits of a truly modern user experience interface, AI Studio serves the unique needs in customers’ edge use cases for ease of application development, deployment, and management, as well as broad usability by both developers and domain expert non-developers.”



The combination of AI Studio innovations in user interface, use of collaborative Marketplaces, end-to-end application development, and operational management, collectively bridge the operational chasm hindering AI edge ROI. Deployed with the Blaize AI edge computing hardware offerings that address unserved edge hardware needs, AI Studio makes AI more practical and economical for edge use cases where unmet application development and MLOps needs delay the pace of production deployment.



“In our work for clients, which may include developing models for quality inspection within manufacturing, identifying stress markers to improve drug trials or even predicting high resolution depth for autonomous vehicles, it is vital that businesses can build unique AI applications that prove their ideas quickly,” says Tim Ensor, Director of AI, Cambridge Consultants. “AI Studio offers innovators the means to achieve this confidence in rapid timeframes, which is a really exciting prospect.” Cambridge Consultants, part of Capgemini Group, helps the world’s biggest brands and most ambitious businesses innovate in AI, including those within the Blaize ecosystem.

Code-free assistive UI for more users, more productivity
The AI Studio code-free visual interface is intuitive for a broad range of skill levels beyond AI data scientists, who are a scarce and costly resource for many organizations. "Hey Blaize" summons a contextually intelligent assistant with an expert, knowledge-driven recommendation system to guide users through the workflow. This ease of use opens AI edge app development to wider teams, from AI developers to system builders to business domain subject matter experts.

Open standards for user flexibility, broader adoption
With AI Studio, users can deploy models with one click to plug into any workflow across multiple open standards including ONNX, OpenVX, containers, Python, or GStreamer. No other solution offers this degree of open standard deployment support, as most are proprietary solutions that lock in users with limited options. Support for these open standards allows AI Studio to deploy to any hardware that fully supports the standards.
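AI Studio's one-click export itself is proprietary, but a model exported to one of these open standards can be consumed with ordinary open tooling. As a hedged illustration, running an ONNX model with the onnxruntime package (the model path and dynamic-shape handling here are assumptions):

    import numpy as np
    import onnxruntime as ort

    # "model.onnx" is a placeholder for an exported model file.
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

    # Build a dummy input that matches the model's declared input shape,
    # substituting 1 for any dynamic dimensions.
    inp = session.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    dummy = np.random.rand(*shape).astype(np.float32)

    # Run inference and inspect the first output tensor.
    outputs = session.run(None, {inp.name: dummy})
    print(outputs[0].shape)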



Marketplaces collaboration
Marketplace support allows users to discover models, data and complete applications from anywhere – public or private – and collaborate continuously to build and deploy high-quality AI applications.

AI Studio supports open public models, data marketplaces and repositories, and provides connectivity and infrastructure to host private marketplaces. Users can continually scale proven AI edge models and vertical AI solutions to effectively reuse across enterprises, choosing from hundreds of models with drag-and-drop ease to speed application development.



Easy-to-Use application development workflow:
The AI Studio model development workflow allows users to easily train and optimize models for specific datasets and use cases, and deploy them quickly in multiple formats and packages. With the click of a button, AI Studio's unique Transfer Learning feature quickly retrains imported models for the user's data and use case. Blaize's edge-aware optimization tool, NetDeploy, automatically optimizes models to the user's specific accuracy and performance needs. With AI Studio, users can also build and customize complete application flows beyond neural networks, such as image signal processing, tracking, or sensor-fusion functions.
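Blaize has not published the internals of its Transfer Learning feature or of NetDeploy, but the underlying idea of transfer learning can be sketched with standard open-source tooling. For example, with PyTorch and torchvision, retraining only a new classifier head on a pretrained backbone (the model choice and class count are arbitrary assumptions):

    import torch
    from torch import nn
    from torchvision import models

    # Start from a network pretrained on ImageNet.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pretrained backbone so only the new head is trained.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the classifier head for a hypothetical 5-class use case.
    model.fc = nn.Linear(model.fc.in_features, 5)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    # ...train only model.fc on the target dataset as usual...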



Ground-breaking edge MLOps/DevOps features
As a complete end-to-end platform, AI Studio helps users deploy, manage, monitor and continuously improve their edge AI applications. Built on a cloud-native infrastructure based on microservices, containers and Kubernetes, AI Studio is highly scalable and reliable in production.



Blaize AI Studio Early Adopter Customers Results
In the smart retail, smart city, and Industry 4.0 markets, Blaize customers are realizing new levels of efficiency in AI application development and deployment using AI Studio. Examples include:

– Complete end-to-end AI development cycle reduction from months to days
– Reduction in training compute by as much as 90%
– Edge-aware efficient optimizations and compression of models with a < 3% accuracy drop
– New revolutionary contextual conversational interfaces that eclipse visual UI



Availability
AI Studio is available now to qualified early adopter customers, with general availability in Q1 2021. The AI Studio product offering includes licenses for individual seats, enterprise, and on-premise subscriptions, with product features and services suited to the needs of each license type.



About Blaize


Blaize leads new-generation computing unleashing the potential of AI to enable leaps in the value technology delivers to improve the way we all work and live. Blaize offers transformative computing solutions for AI data collection and processing at the edge of the network, with a focus on smart vision applications including automobility, retail, security, industrial and metro. Blaize has secured US$87M in equity funding to date from strategic and venture investors DENSO, Daimler, SPARX Group, Magna, Samsung Catalyst Fund, Temasek, GGV Capital, Wavemaker and SGInnovate. With headquarters in El Dorado Hills (CA), Blaize has teams in Campbell (CA), Cary (NC), and subsidiaries in Hyderabad (India), Manila (Philippines), and Leeds and Kings Langley (UK), with 300+ employees worldwide.