The Hidden History Of Windows Server
Just over thirty years ago, as the PC revolution was starting to sweep across the business-scape, an operating system was launched to manage office computer networks. Today, that same operating system is still in use, except it's now helping companies run operations across continents from cloud servers that might be thousands of miles away.
Its name is Windows Server, and it has taken on many different roles over its long lifetime. From ecommerce to remote working, its use cases have evolved with the maturation of the digital economy. To Erin Chapple, who has worked on the product for more than two decades, what's especially interesting about Windows Server is not just the twists and turns of its development, but how those shifts have shaped the way companies operate. "To me, it's exciting working in platforms where you can put a technology forward that changes the way in which a community or a business does their work," says Chapple, who is now Corporate Vice President for Azure Infrastructure at Microsoft. We spoke to her and her colleagues to tell the story of how Windows Server got here, and where it could go next…
Strong foundations
There's not much about the IT world of the early 1990s that's still recognizable today. The internet was almost unheard of, there were no such things as smartphones, and even home computers were only just starting to go mainstream.
In business, however, PCs were becoming regular fixtures on office desks. As they proliferated, companies needed a way to securely manage all those computers, and to network them for ease of file sharing and printing. The solution was a new kind of PC called a "server" that could act as a central hub for the network, providing those functionalities to its "client" workstations. When Windows Server arrived in the guise of Windows NT back in 1993, it enabled many businesses to run their own server hardware for the first time.
What's extraordinary is that many of the technologies and concepts created 30 years ago for Windows NT have stood the test of time. "Those technologies delivered back then are still in use today—and they're still used in all aspects," says Jeff Woolsey, Principal Program Manager for Windows Server at Microsoft. Woolsey believes there was one quality in particular that laid the path for Windows Server's longevity. "When you look at Windows NT, there was this notion of reliability," he says. "It's something that we totally take for granted now. But 30 years ago, guess what? It was common for a bad-acting application to not only crash itself, but very often take down the operating system, too." Windows NT introduced protected memory, meaning one wayward application didn't crash the entire system. "It gave us preemptive multitasking, it gave us all of these fundamental core technologies around reliability so that if an app crashes, your other applications and your operating system continues to run." These technologies have since migrated into consumer devices, such as laptops and even Xbox games consoles, and now touch billions of users.
Woolsey credits the founding engineers of Windows Server for establishing another key characteristic: Its ability to bend to the needs of the IT professional running the system. Instead of requiring administrators to seek out third-party apps to handle particular tasks, Windows Server offered these utilities as part of the operating system itself. Server administrators could pick what tools they wanted to install—like diners picking dishes from a menu—allowing the server to take on different "roles" in response to particular needs. And this menu evolved. As the internet took off in 1995, for example, Internet Information Services (IIS) was added to the operating system to help companies host their own websites.
"When you deploy Windows Server, I like to think of it as a blank slate," says Woolsey. "The first thing you must decide is: What is its role? What is its job? Is it a file server? A print server? A web server? An application server?"
The datacenter decade
With the PC era in full swing around the turn of the millennium, the economy became increasingly digitized, necessitating ever more computing capacity. Many businesses came to host their servers in datacenters—buildings stacked floor-to-ceiling with racks of computer hardware.
With this huge scaling up of operations came increased complexity. Greater workloads demanded judicious load-balancing so that user performance didn't suffer; security threats were growing as cyberattacks started to become serious business; and the now-universal demand for internet access required careful network management.
It was during this era, in 2006, that Microsoft created PowerShell. It was a key inflection point in the story of Windows Server. The program automates tasks such as installing security updates or running backups, freeing IT workers from some of the mundane day-to-day tasks they would previously have done by hand, and allowing them to focus more on overall strategy.
Erin Chapple has a keen sense of its impact. "I remember being on the street at [the Microsoft conference] TechEd in New Orleans in 2010," she says. "A PowerShell customer came up and stopped Jeffrey Snover, who was the mastermind behind PowerShell, and said: 'You've changed my life in the sense that I can now earn a greater living, and I can have more direct impact on the business, and my satisfaction with my job is higher'."
Cruising into the cloud era
If the early 2000s were about companies operating their own datacenters, the 2010s onwards saw a shift to the cloud, with businesses tending to rent remote server space rather than bearing the huge overheads of running their own infrastructure. This also enabled them to deploy software more easily and opened the door to device-agnostic remote working.
That didn't spell the end of the Windows Server story, though. It's still there as the backbone of Microsoft's Azure cloud offering—only the physical location of the servers has changed. But the way it fits into corporate systems has evolved: Many businesses are now taking a hybrid approach—they still run some of their own server equipment, but now marry that with the ample resources that the cloud can offer.
Refining the product for this hybrid approach required Microsoft to pay close attention to customer comments. Chapple recalls how Microsoft initially decided to offer a cloud-based interface to manage on-premises servers. "Well, it turns out that customers weren't necessarily ready to make that switch," says Chapple, referring to feedback received in 2016. "We thought they would manage Windows Server from the cloud, when really they wanted to keep Windows servers on premises, they wanted to manage from on premises."
In response, Microsoft released what became Windows Admin Center as an on-premises management tool, and "it had one of the fastest growths of any management tool I'm aware of," says Chapple. Years later, in 2019, with a product called Azure Arc, Microsoft provided a single platform to manage both cloud and on-premises servers, "and the whole thing has come full circle," says Chapple. "It's really exciting to see customers innovating across that spectrum."
This hybrid set-up can prove invaluable. Chapple cites the example of a cruise line that moved its server infrastructure into the Azure cloud, but also still uses local server equipment on its ships as they sail around the world, often in places where connectivity is limited. "That lets them gain agility and reliability at the edge, on the shore, or at sea," says Chapple. "But by adopting Azure, they're also able to manage their apps and track the company's ships and vital signals as they travel around the world."
Moving into the AI era
We're currently at the very beginning of what some predict will be the next big platform shift in computing: The AI era.
Microsoft has been one of the companies investing heavily in generative AI; its partnership with OpenAI has led to the range of Copilot AI assistants that can help customers code apps, analyze their own business data, or improve the security of their cloud environments. "And, of course, with Azure AI, we're making it possible for customers to build their own Copilots too, if they want to do that," says Jeff Woolsey. If you're a retailer, for example, your server could run a Copilot chatbot, trained on your own catalogs or product manuals, to answer customer queries about your products.
Many Windows Server customers, from US retailers to centuries-old European postal services, are beginning to reimagine their businesses with AI. So what future developments in AI could impact their experience of running their servers? One area to watch is the work being done on "AI agents"—bots that don't merely offer suggestions but autonomously perform sequences of actions to achieve higher-order goals. These could potentially allow system administrators to automate more of their everyday tasks. If AI agents become commonplace, servers may also need to evolve to handle interactions with third-party AI agents owned by other corporations or people.
While the full impact of AI on businesses is yet to be seen, Chapple believes that it is already having an immediate impact on today's IT professionals not unlike the emergence of PowerShell all those years ago. "It's about automating or reducing the toil in the day-to-day lives of the administrators—and developers for that matter," she says. "That helps make them more productive, and also helps get the most out of their IT infrastructure."
Learn more about Azure, adaptive cloud, and the upcoming Windows Server 2025: https://aka.ms/WSnext
Understanding The Windows Copilot Runtime
It wasn't hard to spot the driving theme of Build 2024. From the pre-event launch of Copilot+ PCs to the two big keynotes from Satya Nadella and Scott Guthrie, it was all AI. Even Azure CTO Mark Russinovich's annual tour of Azure hardware innovations focused on support for AI.
For the first few years after Nadella became CEO, he spoke many times about what he called "the intelligent cloud and the intelligent edge," mixing the power of big data, machine learning, and edge-based processing. It was an industrial view of the cloud-native world, but it set the tone for Microsoft's approach to AI, using the supercomputing capabilities of Azure to host training and inference for our AI models in the cloud, no matter how big or how small those models are.
Moving AI to the edge
With the power and cooling demands of centralized AI, it's not surprising that Microsoft's key announcements at Build were focused on moving much of its endpoint AI functionality from Azure to users' own PCs, taking advantage of local AI accelerators to run inference on a selection of different algorithms. Instead of running Copilots on Azure, Windows would use the neural processing units, or NPUs, that are part of the next generation of desktop silicon from Arm, Intel, and AMD.
Hardware acceleration is a proven approach that has worked again and again. Back in the early 1990s I was writing finite element analysis code that used vector processing hardware to accelerate matrix operations. Today's NPUs are the direct descendants of those vector processors, optimized for similar operations in the complex vector space used by neural networks. If you're using any of Microsoft's current generation of Arm devices (or a handful of recent Intel or AMD devices), you've already got an NPU, though not as powerful as the 40 TOPS (tera operations per second) needed to meet Microsoft's Copilot+ PC requirements.
Microsoft has already demonstrated a range of different NPU-based applications on this existing hardware, with access for developers via its DirectML APIs and support for the ONNX inference runtime. However, Build 2024 showed a different level of commitment to its developer audience, with a new set of endpoint-hosted AI services bundled under a new brand: the Windows Copilot Runtime.
The Windows Copilot Runtime is a mix of new and existing services that are intended to help deliver AI applications on Windows. Under the hood is a new set of developer libraries and more than 40 machine learning models, including Phi Silica, an NPU-focused version of Microsoft's Phi family of small language models.
The models of the Windows Copilot Runtime are not all language models. Many are designed to work with the Windows video pipeline, supporting enhanced versions of the existing Studio effects. If the bundled models are not enough, or don't meet your specific use cases, there are tools to help you run your own models on Windows, with direct support for PyTorch and a new web-hosted model runtime, WebNN, which allows models to run in a web browser (and possibly, in a future release, in WebAssembly applications).
An AI development stack for Windows
Microsoft describes the Windows Copilot Runtime as "new ways of interacting with the operating system" using AI tools. At Build the Windows Copilot Runtime was shown as a stack running on top of new silicon capabilities, with new libraries and models, along with the necessary tools to help you build that code.
That simple stack is something of an oversimplification. Then again, showing every component of the Windows Copilot Runtime would quickly fill a PowerPoint slide. At its heart are two interesting features: the DiskANN local vector store and the set of APIs that are collectively referred to as the Windows Copilot Library.
You might think of DiskANN as the vector database equivalent of SQLite. It's a fast local store for the vector data that are key to building retrieval-augmented generation (RAG) applications. Like SQLite, DiskANN has no UI; everything is done through either a command line interface or API calls. DiskANN uses a built-in nearest neighbor search and can be used to store embeddings and content. It also works with Windows' built-in search, linking to NTFS structures and files.
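For a sense of what that retrieval step involves, here is a minimal C# sketch of a nearest-neighbor lookup over stored embeddings—the operation a store like DiskANN accelerates at scale. It's a brute-force, in-memory stand-in rather than DiskANN itself, and the Embed function is a toy placeholder for a real embedding model.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Brute-force stand-in for the approximate nearest-neighbor search a local
// vector store like DiskANN provides to RAG applications. Illustrative only.
class VectorSearchSketch
{
    // Each entry pairs a chunk of local content with its embedding vector.
    record Entry(string Content, float[] Vector);

    // Toy "embedding": hash characters into a fixed-length vector. A real
    // pipeline would call an embedding model here instead.
    static float[] Embed(string text)
    {
        var v = new float[16];
        foreach (char c in text.ToLowerInvariant())
            v[c % 16] += 1f;
        return v;
    }

    // Cosine similarity between two vectors of equal length.
    static float Cosine(float[] a, float[] b)
    {
        float dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.Length; i++) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]; }
        return dot / (MathF.Sqrt(na) * MathF.Sqrt(nb) + 1e-6f);
    }

    static void Main()
    {
        // Index a few document chunks (stand-ins for local files and content).
        var index = new List<Entry>();
        foreach (var doc in new[] { "VPN troubleshooting steps", "Expense policy 2024", "Printer setup guide" })
            index.Add(new Entry(doc, Embed(doc)));

        // Retrieve the top-2 chunks closest to the query embedding; these are
        // what a RAG pipeline would hand to a language model as context.
        var query = Embed("how do I fix my vpn connection");
        foreach (var hit in index.OrderByDescending(e => Cosine(query, e.Vector)).Take(2))
            Console.WriteLine(hit.Content);
    }
}
```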
Building code on top of the Windows Copilot Runtime draws on the more than 40 different AI and machine learning models bundled with the stack. Again, these aren't all generative models, as many build on models used by Azure Cognitive Services for computer vision tasks such as text recognition and the camera pipeline of Windows Studio Effects.
There's even the option of switching to cloud APIs, for example offering the choice of a local small language model or a cloud-hosted large language model like ChatGPT. Code might automatically switch between the two based on available bandwidth or the complexity of the current task.
Microsoft provides a basic checklist to help you decide between local and cloud AI APIs. Key points to consider are available resources, privacy, and costs. Using local resources won't cost anything, while the costs of using cloud AI services can be unpredictable.
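To make that decision concrete, here's a small hypothetical C# sketch of the routing pattern. Nothing in it is a Windows Copilot Runtime API: RunLocalModelAsync, CallCloudModelAsync, and the thresholds are placeholders standing in for a local SLM call, a cloud LLM call, and whatever policy you choose.

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical router between an on-device small language model and a
// cloud-hosted large model, based on the checklist above: available
// resources, privacy, and cost. Not a real API; the model calls below
// are placeholders so the sketch is self-contained.
class ModelRouterSketch
{
    static async Task<string> AnswerAsync(string prompt, bool sensitiveData,
                                          double bandwidthMbps, int promptTokens)
    {
        // Privacy-sensitive prompts and poor connectivity stay on-device;
        // long or complex prompts are worth the cost of the cloud model.
        bool preferLocal = sensitiveData || bandwidthMbps < 1.0 || promptTokens < 512;

        if (!preferLocal)
        {
            try { return await CallCloudModelAsync(prompt); }
            catch (Exception) { /* offline or over quota: fall back to local */ }
        }
        return await RunLocalModelAsync(prompt);
    }

    // Placeholder implementations, standing in for an NPU-hosted SLM
    // (such as Phi Silica) and a cloud LLM endpoint respectively.
    static Task<string> RunLocalModelAsync(string prompt) => Task.FromResult($"[local] {prompt}");
    static Task<string> CallCloudModelAsync(string prompt) => Task.FromResult($"[cloud] {prompt}");

    static async Task Main() =>
        Console.WriteLine(await AnswerAsync("Summarize this contract.", true, 50.0, 200));
}
```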
Windows Copilot Library APIs like AI Text Recognition require an appropriate NPU to take advantage of hardware acceleration. As with the equivalent Azure API, images need to be copied into an image buffer before calling the API: you deliver a bitmap and collect the recognized text as a string. You can additionally get bounding box details, so you can provide an overlay on the initial image, along with confidence levels for the recognized text.
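The shipping API surface hadn't been fully published when this was written, so the sketch below only mirrors that flow with invented stand-in types: FakeTextRecognizer, RecognizedLine, and BoundingBox are placeholders, not the Windows Copilot Library.

```csharp
// Hypothetical stand-in for the AI Text Recognition flow described above.
// None of these types are the real Windows Copilot Library API; they only
// illustrate the bitmap -> image buffer -> recognizer -> text/bounding-box
// flow and are stubbed so the example runs on its own.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

record BoundingBox(int X, int Y, int Width, int Height);               // placeholder type
record RecognizedLine(string Text, BoundingBox Box, float Confidence); // placeholder type

class FakeTextRecognizer
{
    // Stub that pretends to run NPU-accelerated OCR over an image buffer.
    public Task<IReadOnlyList<RecognizedLine>> RecognizeAsync(byte[] imageBuffer) =>
        Task.FromResult<IReadOnlyList<RecognizedLine>>(new[]
        {
            new RecognizedLine("Invoice #1234", new BoundingBox(40, 32, 220, 28), 0.97f),
        });
}

class TextRecognitionSketch
{
    static async Task Main()
    {
        byte[] imageBuffer = new byte[640 * 480 * 4]; // the bitmap is copied into a buffer first
        var recognizer = new FakeTextRecognizer();

        // Collect recognized text as strings, plus bounding boxes and
        // confidence levels for overlaying results on the source image.
        foreach (var line in await recognizer.RecognizeAsync(imageBuffer))
            Console.WriteLine($"{line.Text} @ ({line.Box.X},{line.Box.Y}) conf={line.Confidence:P0}");
    }
}
```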
Phi Silica: An on-device language model for NPUs
One of the key components of the Windows Copilot Runtime is the new NPU-optimized Phi Silica small language model. Part of the Phi family of models, Phi Silica is a simple-to-use generative AI model designed to deliver text responses to prompt inputs. Sample code shows that Phi Silica uses a new Microsoft.Windows.AI.Generative C# namespace and it's called asynchronously, responding to string prompts with a generative string response.
Using the basic Phi Silica API is straightforward. Once you've created a method to handle calls, you can either wait for a complete string or get results as they are generated, allowing you to choose the user experience. Other calls get status information from the model, so you can see if prompts have created a response or if the call has failed.
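Based on the sample code shown at Build, the basic call looks something like the sketch below. The Microsoft.Windows.AI.Generative API was still experimental at the time, so treat the type and method names here (LanguageModel.CreateAsync, GenerateResponseAsync) as provisional rather than final.

```csharp
// Sketch of the Phi Silica call pattern shown at Build 2024. The API was
// still experimental, so names and signatures may differ in the final SDK.
using System;
using System.Threading.Tasks;
using Microsoft.Windows.AI.Generative; // ships with the Windows App SDK

class PhiSilicaSketch
{
    static async Task Main()
    {
        // Create the NPU-hosted language model instance asynchronously.
        var languageModel = await LanguageModel.CreateAsync();

        // Send a string prompt and wait for the complete generated string.
        string prompt = "Summarize the key points of this meeting transcript.";
        var result = await languageModel.GenerateResponseAsync(prompt);

        // The result object also carries status information, so you can
        // check whether the model produced a response before using it.
        Console.WriteLine(result.Response);
    }
}
```

A progress-based variant of the call streams partial results as tokens are generated, which is how you'd build the incremental user experience described above instead of waiting for the complete string.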
Phi Silica does have limitations. Even using the NPU of a Copilot+ PC, Phi Silica can process only 650 tokens per second. That should be enough to deliver a smooth response to a single prompt, but managing multiple prompts simultaneously could show signs of a slowdown.
Phi Silica was trained on textbook content, so it's not as flexible as, say, ChatGPT. However, it is less prone to errors, and it can be built into your own local agent orchestration using RAG techniques and a local vector index stored in DiskANN, targeting the files in a specific folder.
Microsoft has talked about the Windows Copilot Runtime as a separate component of the Windows developer stack. In fact, it's much more deeply integrated than the Build keynotes suggest, shipping as part of a June 2024 update to the Windows App SDK. Microsoft is not simply making a big bet on AI in Windows, it's betting that AI and, more specifically, natural language and semantic computing are the future of Windows.
Tools for building Windows AI
While it's likely that the Windows Copilot Runtime stack will build on the existing Windows AI Studio tools, now renamed the AI Toolkit for Visual Studio Code, the full picture is still missing. Interestingly, recent builds of the AI Toolkit (post Build 2024) added support for Linux x64 and Arm64 model tuning and development. That bodes well for a rapid rollout of a complete set of AI development tools, and for a possible future AI Toolkit for Visual Studio.
One feature of the AI Toolkit that's essential for working with Windows Copilot Runtime models is its playground, where you can experiment with your models before building them into your own Copilots. It's intended to work with small language models like Phi, or with open-source PyTorch models from Hugging Face, so it should benefit from new OS features in the 24H2 Windows release and from the NPU hardware in Copilot+ PCs.
We'll learn more details with the June release of the Windows App SDK and the arrival of the first Copilot+ PC hardware. However, already it's clear that Microsoft aims to deliver a platform that bakes AI into the heart of Windows and, as a result, makes it easy to add AI features to your own desktop applications—securely and privately, under your users' control. As a bonus for Microsoft, it should also help keep Azure's power and cooling budget under control.
Spacetop G1 Is A $1900 Laptop That Uses A Pair Of Augmented Reality Glasses As A Display
Mobile computing presents some unique challenges: laptops with big displays (or multiple screens) give you plenty of space for apps and games, but often at the cost of battery life and portability. Meanwhile laptops with thin and light designs, small screens, and/or long battery life address those challenges, but don't give you a lot of screen space for multitasking.
A startup called Sightful is taking an unusual approach to that dilemma: Sightful's Spacetop G1 is a mobile computer that's basically a laptop without a built-in screen, because it uses a pair of XREAL Air 2 Pro glasses to give you a virtual "100 inch" display. First unveiled about a year ago through a small early access program, the Spacetop G1 will be widely available later this year for $1900. Customers can reserve one now for $100.
So what does $1900 buy you? Basically a screenless laptop: a 79-key keyboard and multitouch touchpad, but no built-in display. Instead it has two USB 3.2 Gen 2 Type-C ports that you can use to connect peripherals, including the AR glasses that come with the device, although you can also use those ports to connect a portable or desktop monitor instead.
The glasses feature a pair of 1920 x 1080 pixel OLED displays with 90 Hz refresh rates, a 50 degree field of view, and 42 PPD (pixels per degree), as well as support for custom prescription lenses. XREAL's glasses also include stereo 6W open-ear speakers.
Due to the limited field of view, Sightful's claims that this is a laptop with a 100 inch display are a little dubious – you can't actually see the whole thing at once, and instead have to move your head to view different portions of that virtual viewscape. But I suppose the same would be true if you were sitting very close to a real 100 inch display.
While the design is certainly unusual, so is the software, because Sightful says this isn't your usual Windows, Mac, or Linux laptop. Instead it runs a custom operating system called SpaceOS, which is built on top of Google's ChromiumOS (the open source version of the software that runs on Chromebooks) and designed to let you set up a virtual desktop optimized for the Air 2 Pro glasses, with navigation via custom gestures.
And that software runs on hardware that's… basically what you'd expect from a decent smartphone. The Spacetop G1 features a Qualcomm Snapdragon QCS8550 processor, 16GB of LPDDR5 memory, and 128GB of UFS 3.1 storage.
With Adreno 740 graphics and a Hexagon NPU, Sightful says the system supports up to 48 TOPS of total AI performance… which would be more impressive if Qualcomm hadn't just launched its Snapdragon X Plus and Elite chips, which deliver 45 TOPS using the NPU alone while also offering CPU and graphics performance that's said to be competitive with Intel, AMD, and Apple processors.
One thing the Spacetop G1 has going for it is a decent selection of wireless capabilities, with support for WiFi 7, Bluetooth 5.3, and 5G Sub-6 GHz and/or 4G LTE network bands. There's also dual SIM support thanks to a nano SIM card slot and an eSIM.
The Spacetop G1 also features a 60 Wh battery and support for 63W fast charging with a USB-C power adapter. Sightful says you should be able to get up to 8 hours of battery life on a charge.
There's also a built-in 5MP camera that lets you participate in video calls at resolutions up to 2592 x 1944 pixels.
While it's certainly true that the computer is more portable than a laptop with a physical 100 inch display would be, at 1.4kg (3.1 pounds), the Spacetop G1 isn't actually all that lightweight for a screenless laptop. There are plenty of 13 and 14 inch notebooks that weigh less these days.
The Spacetop G1 features a magnesium body with a hard cover to help protect the glasses when they're stored inside. The whole thing measures 300 x 231 x 62mm (11.8″ x 9.1″ x 2.4″) at its thickest point, which also makes it a bit bulkier than most modern thin and light notebooks (although it's only 13mm or 0.5 inches at its thinnest point).
The AR glasses weigh 85 grams (3 ounces), but they may feel a bit heavier thanks to the USB cable that you'll need to use to tether them to the Spacetop G1.
Sightful has upgraded some of the hardware since launching last year's early access model. The G1 has a processor that the company says is 70% faster, several hours of additional battery life, twice as much RAM, and support for faster wireless networking.
The Spacetop G1 has a 928 x 262 pixel OLED mini-display above the keyboard, which can show status information, QR codes, or other information. This is an upgrade over the 1.54 inch, 200 x 200 pixel black and white ePaper screen used in the previous model.
And based on feedback from early users, the webcam has been upgraded to better support recording in low-light conditions, and shifted to a higher position for improved angles during video calls.
But it still seems like a pretty tough sell, especially with a $1900 price tag. It's an ARM-based PC with a custom operating system that's designed first and foremost to be used with a set of augmented reality glasses rather than a more traditional display. While Sightful says it should support most web apps, it's unclear what kind of compatibility there will be with the kind of native desktop apps you can run on most PCs.
Eight hours of battery life isn't awful, but it's also not stellar by 2024 standards. You'll have to take off your wearable screens every time you step away from the computer. And if you want to share what you're looking at with other people nearby, you'll either need to hand them your glasses or connect a monitor.
Don't get me wrong. It's exciting to see companies thinking outside the box and trying to imagine new form factors for personal computers. I'm just not sure that a screenless-laptop-with-AR glasses is going to be the one that takes off… particularly not an expensive model with a custom OS.