According to Forrester’s Infrastructure Cloud Survey in 2023, 79% of about 1,300 enterprise cloud decision-makers surveyed said their organizations are implementing private clouds. Additionally, IDC forecasts that global spending on private, dedicated cloud services, including hosted private clouds, will hit $20.4 billion in 2024 and will at least double by 2027.
In addition, global spending on enterprise private cloud infrastructure, including hardware, software, and support services, will be $51.8 billion in 2024 and grow to $66.4 billion in 2027, according to IDC. Of course, public cloud providers are still the 800-pound gorillas. Public clouds, including the big three of AWS, Microsoft, and Google, are expected to rake in $815.7 billion in 2024.
The next version of the Ruby programming language, Ruby 3.4.0, has been released in preview, bringing changes to string literals along with class updates.
Unveiled May 16, the Ruby 3.4.0 preview is downloadable from ruby-lang.org. With this update, string literals in files without a frozen_string_literal comment now behave as if they were frozen. If mutated, a deprecation warning is emitted. The change marks a first step toward making frozen string literals the default in Ruby. Frozen or immutable strings offer both performance and safety advantages.
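The preview's deprecation warning applies to literals that get mutated; the same frozen semantics can be seen in any current Ruby with an explicit freeze, as in this sketch:

```ruby
# Explicitly frozen string: in-place mutation raises FrozenError.
s = "hello".freeze
puts s.frozen?        # => true

begin
  s << " world"       # attempt to mutate in place
rescue FrozenError => e
  puts "rejected: #{e.class}"
end

# Unary + returns a mutable copy when you genuinely need to mutate.
t = +s
t << " world"
puts t                # => "hello world"
```

Under Ruby 3.4's new behavior, a bare literal in a file without the frozen_string_literal comment would warn on the `<<` call instead of mutating silently.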
IBM is expanding Qiskit, its quantum computing software, into a comprehensive software stack that includes middleware, serverless building blocks, and generative AI coding assistance. The company says the platform for building, optimizing, and executing programs on IBM quantum systems will also deliver better performance.
Announced May 15, the initiative builds on Qiskit SDK 1.x, combining a stable software development kit with a portfolio of services for running complex quantum circuits on IBM quantum computers with 100 or more qubits. IBM said the expansion will enable members of the IBM Quantum Network to discover the next generation of quantum algorithms in their respective domains. Qiskit has seen more than 100 releases since its origins as a research tool built to study the inner workings of quantum computers.
Java's equals() and hashCode() are two methods that work together to verify whether two objects have the same value. You can use them to make object comparisons easy and efficient in your Java programs.
Without equals() and hashCode(), we would have to compare every field of an object by hand. The resulting code would be confusing and hard to read. Using the equals() and hashCode() methods together leads to more flexible and cohesive code.
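As a minimal sketch, the hypothetical Point class below overrides both methods so that two points with the same coordinates compare as equal and produce the same hash:

```java
import java.util.Objects;

// Hypothetical Point class showing a consistent equals()/hashCode() pair.
public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;              // same reference, trivially equal
        if (!(o instanceof Point)) return false; // also rejects null
        Point p = (Point) o;
        return x == p.x && y == p.y;             // value comparison, field by field
    }

    @Override
    public int hashCode() {
        return Objects.hash(x, y);               // equal objects must produce equal hashes
    }
}
```

The pairing matters for collections: HashMap and HashSet use hashCode() to pick a bucket and equals() to confirm a match, so overriding one method without the other breaks lookups.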
Some time ago I wrote about the work Microsoft was doing to improve the Azure APIs. That project delivered a set of automatically generated API definitions and SDKs, making it easier to link your applications to the cloud and to manage Azure services using code.
Behind the scenes was a new language Microsoft developed called CADL, the Concise API Design Language. Building on concepts from both TypeScript and Bicep, CADL allowed you to define and describe APIs in a way that made it easy to use code to define API operations and then compile the result as an OpenAPI definition. It also let you define guardrails and common definition standards as libraries, helping architects and developers collaborate on API designs. CADL was a step up in API design, able to produce a 500-line OpenAPI definition in only 50 lines of code.
In May 1974, Donald Chamberlin and Raymond Boyce published a paper on SEQUEL, a structured query language that could be used to manage and sort data. After a change in title due to another company’s trademark on the word SEQUEL, Structured Query Language (SQL) was taken up by database companies like Oracle alongside their new-fangled relational database products later in the 1970s. The rest, as they say, is history.
SQL is now 50 years old. SQL was designed and then adopted around databases, and it has continued to grow and develop as a way to manage and interact with data. According to Stack Overflow, it is the third most popular language used by professional programmers on a regular basis. In 2023, the IEEE noted that SQL was the most popular language for developers to know when it came to getting a job, due to how it could be combined with other programming languages.
When we set out to rebuild the engine at the heart of our managed Apache Kafka service, we knew we needed to address several unique requirements that characterize successful cloud-native platforms. These systems must be multi-tenant from the ground up, scale easily to serve thousands of customers, and be managed largely by data-driven software rather than human operators. They should also provide strong isolation and security across customers with unpredictable workloads, in an environment in which engineers can continue to innovate rapidly.
Google has updated both its Flutter multiplatform application development framework and the accompanying Dart language. In making these updates, the company stressed the addition of the WebAssembly bytecode instruction format as a compilation target for web apps built with Flutter and Dart. The announcement follows recent reports of Google laying off staff from the Dart and Flutter teams.
Without variables, programming languages are next to useless. Fortunately, JavaScript's variable system is incredibly powerful and versatile. This article shows you how to use JavaScript variables to store numbers, text strings, objects, and other data types. Once you've stored this information, you can use it anywhere in your program.
All JavaScript programming happens in an environment like a web browser, Node, or Bun.js. Each of these environments has its own set of pre-defined variables like window and console. These variables are not user-defined because they are set by the environment. Another kind of variable is the user-defined variable defined by other developers, such as in third-party frameworks or libraries you use. Then there are variables you create while writing your programs, using the let and const keywords. These are defined by you, the user. This article is about how to create your own user-defined variables.
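A brief sketch of the distinction, using the environment-provided console alongside user-defined variables declared with let and const:

```javascript
// `console` is predefined by the environment (browser, Node, or Bun);
// everything declared below is user-defined.

const maxRetries = 3;          // const: the binding cannot be reassigned
let attempts = 0;              // let: block-scoped and reassignable

while (attempts < maxRetries) {
  attempts += 1;               // reassignment is fine for `let`
}

const user = { name: "Ada" };  // const protects the binding, not the value
user.name = "Grace";           // mutating the object is still allowed

console.log(attempts);         // 3
console.log(user.name);        // "Grace"
```

Note the design distinction: const prevents reassigning the variable itself, not mutation of the object it points to.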
Most people assume that analytical databases, or OLAPs, are big, powerful beasts—and they are correct. Systems like Snowflake, Redshift, or Postgres involve a lot of setup and maintenance, even in their cloud-hosted incarnations. But what if all you want is "just enough" analytics for a dataset on your desktop? In that case, DuckDB is worth exploring.
Columnar data analytics on your laptop

DuckDB is a tiny but powerful analytics database engine—a single, self-contained executable, which can run standalone or as a loadable library inside a host process. There's very little you need to set up or maintain with DuckDB. In this way, it is more like SQLite than the bigger analytical databases in its class.
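As a sketch of that "just enough" workflow, DuckDB can query a local CSV file directly with plain SQL and no schema setup (the file name here is hypothetical):

```sql
-- Aggregate straight from a CSV on disk; DuckDB infers the schema.
SELECT category,
       AVG(price) AS avg_price
FROM read_csv_auto('sales.csv')
GROUP BY category
ORDER BY avg_price DESC;
```

The same statement works from the duckdb command-line shell or through its client APIs for Python and other languages.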
Google has expanded on its Gemma family of AI models, introducing the PaliGemma vision-language model (VLM) and announcing Gemma 2, the next generation of Gemma models based on a new architecture. The company also released the LLM Comparator in open source, an addition to its Responsible Generative AI Toolkit.
Angular 18, the next planned release of Google’s TypeScript-based web app development framework, is due to arrive on May 22, with features such as deferrable views and declarative control flow moving out of developer preview to a stable stage.
Deferrable views, which are also known as @defer blocks, can be used in component templates to defer the loading of select dependencies within the template, thus reducing the initial bundle size of the application. Declarative control flow is a new built-in syntax for control flow that brings functionality such as NgIf, NgFor, and NgSwitch into the framework itself (as @if, @for, and @switch respectively), allowing developers to conditionally show, hide, and repeat elements.
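A template sketch of both features follows; the component bindings (user, items, stats) and the app-heavy-chart component are hypothetical:

```html
<!-- Built-in control flow replaces NgIf/NgFor: -->
@if (user.isLoggedIn) {
  <p>Welcome back, {{ user.name }}</p>
} @else {
  <p>Please sign in</p>
}

<ul>
  @for (item of items; track item.id) {
    <li>{{ item.name }}</li>
  }
</ul>

<!-- A deferrable view: the chart's dependencies load only when it scrolls into view. -->
@defer (on viewport) {
  <app-heavy-chart [data]="stats" />
} @placeholder {
  <p>Loading chart…</p>
}
```

The @placeholder block is shown until the viewport trigger fires, keeping the chart's code out of the initial bundle.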
The innovation hub of RSAC 2024, the RSAC Early Stage Expo was specifically designed to showcase emerging players in the information security industry. Among the 50 exhibitors crammed into the second-floor booth space, seven VC-backed up-and-comers in application security and devsecops caught our eye.
AppSentinels

AppSentinels touts itself as a comprehensive API security platform, covering the entire application life cycle. The product conducts thorough analyses of the application’s activities and examines its workflows in detail. Once the AppSentinels product understands the workflows, it can test them against a variety of potential flaws and use that understanding to protect against complex business logic attacks in production environments.
Oracle in its Spring 2024 roadmap for Java SE (Standard Edition) reconfirmed it will extend support for Java 11 through January 2032, and will support Java 8 and Java 11 on the Solaris operating system until at least December 2030 and January 2032 respectively.
The Java SE Spring 2024 roadmap update, published May 13, also notes the company’s continued commercial support of JavaFX and its planned sunsetting of the Advanced Management Console (AMC) after October 2024. AMC users should migrate to Java Management Service (JMS), Oracle said.
As most IT people know, GPUs are in high demand and are critical for running and training generative AI models. The alternative cloud sector, also known as microclouds, is experiencing a significant surge. Businesses such as CoreWeave, Lambda Labs, Voltage Park, and Together AI are at the forefront of this movement. CoreWeave, which started as a cryptocurrency mining venture, has become a major provider of GPU infrastructure.
This shift illustrates a broader trend in which companies are increasingly relying on cloud-hosted GPU services, mainly due to the high cost and technical requirements of installing and maintaining the necessary hardware on-site. Since public cloud providers are not discounting these computing services, microclouds provide a better path for many enterprises.
The hype and awe around generative AI have waned to some extent. “Generalist” large language models (LLMs) like GPT-4, Gemini (formerly Bard), and Llama whip up smart-sounding sentences, but their thin domain expertise, hallucinations, lack of emotional intelligence, and obliviousness to current events can lead to terrible surprises. Generative AI exceeded our expectations until we needed it to be dependable, not just amusing.
“AI models currently shine at helping so-so coders get more stuff done that works in the time they have,” argues engineer David Showalter. But is that right? Showalter was responding to Santiago Valdarrama’s contention that large language models (LLMs) are untrustworthy coding assistants. Valdarrama says, “Until LLMs give us the same guarantees [as programming languages, which consistently get computers to respond to commands], they’ll be condemned to be eternal ‘cool demos,’ useless for most serious applications.” He is correct that LLMs are decidedly inconsistent in how they respond to prompts. The same prompt will yield different LLM responses. And Showalter is quite possibly incorrect: AI models may “shine” at helping average developers generate more code, but that’s not the same as generating usable code.
Back in 2014, when the wave of containers, Kubernetes, and distributed computing was breaking over the technology industry, Torkel Ödegaard was working as a platform engineer at eBay Sweden. Like other devops pioneers, Ödegaard was grappling with the new form factor of microservices and containers and struggling to climb the steep Kubernetes operations and troubleshooting learning curve.
As an engineer striving to make continuous delivery both safe and easy for developers, Ödegaard needed a way to visualize the production state of the Kubernetes system and the behavior of users. Unfortunately, there was no specific playbook for how to extract, aggregate, and visualize the telemetry data from these systems. Ödegaard’s search eventually led him to a nascent monitoring tool called Graphite, and to another tool called Kibana that simplified the experience of creating visualizations.
Back in the ancient days of machine learning, before you could use large language models (LLMs) as foundations for tuned models, you essentially had to train every possible machine learning model on all of your data to find the best (or least bad) fit. By ancient, I mean prior to the seminal paper on the transformer neural network architecture, “Attention is all you need,” in 2017.
Red Hat is extending its Lightspeed generative AI technology to work with the company’s Red Hat OpenShift hybrid cloud application platform as well as with Red Hat Enterprise Linux (RHEL).
Announced May 7, Red Hat OpenShift Lightspeed and Red Hat Enterprise Linux Lightspeed will offer intelligent, natural language processing capabilities, intended to make OpenShift and RHEL easier for novices to use and more efficient for experienced professionals, Red Hat said. Red Hat OpenShift Lightspeed is slated for availability in late 2024. Red Hat Enterprise Linux Lightspeed is still in the planning stage.