
So long, Internet Explorer. The browser retires.

17th June 2022 | Latest Posts | by ricardo

Internet Explorer is finally headed out to pasture.

As of this past Wednesday, Microsoft will no longer support the once-dominant browser that legions of web surfers loved to hate—and a few still claim to adore. The 27-year-old application now joins BlackBerry phones, dial-up modems and Palm Pilots in the dustbin of tech history.

IE’s demise was not a surprise. A year ago, Microsoft said that it was putting an end to Internet Explorer on June 15, 2022, pushing users to its Edge browser, which was launched in 2015.

The company made clear then it was time to move on.

“Not only is Microsoft Edge a faster, more secure and more modern browsing experience than Internet Explorer, but it is also able to address a key concern: compatibility for older, legacy websites and applications,” Sean Lyndersay, general manager of Microsoft Edge Enterprise, wrote in a May 2021 blog post.

Users marked Explorer’s passing on Twitter, with some referring to it as a “bug-ridden, insecure POS” or the “top browser for installing other browsers.” For others it was a moment for ’90s nostalgia memes, while The Wall Street Journal quoted a 22-year-old who was sad to see IE go.

Microsoft released the first version of Internet Explorer in 1995, the antediluvian era of web surfing dominated by the first widely popular browser, Netscape Navigator. Its launch signaled the beginning of the end of Navigator: Microsoft went on to tie IE and its ubiquitous Windows operating system together so tightly that many people simply used it by default instead of Navigator.

The Justice Department sued Microsoft in 1997, saying it violated an earlier consent decree by requiring computer makers to use its browser as a condition of using Windows. Microsoft eventually settled that antitrust battle, centered on its use of the Windows monopoly to squash competitors, in 2002. It also tangled with European regulators who said that tying Internet Explorer to Windows gave it an unfair advantage over rivals such as Mozilla’s Firefox, Opera and Google’s Chrome.

Users, meanwhile, complained that IE was slow, prone to crashing and vulnerable to hacks. IE’s market share, which in the early 2000s was over 90%, began to fade as users found more appealing alternatives.

Today, the Chrome browser dominates with roughly a 65% share of the worldwide browser market, followed by Apple’s Safari with 19%, according to internet analytics company Statcounter. IE’s heir, Edge, lags with about 4%, just ahead of Firefox.

Courtesy of: Richard Jacobsen


There’s yet another really good reason to patch your router now

10th June 2022 | Latest Posts | by ricardo

Most routers remain unpatched despite security risks, experts warn.

There are hundreds of vulnerabilities plaguing routers of all shapes and sizes, and most of them have not been patched, new analysis from Kaspersky has warned.

The company’s report says that 506 new vulnerabilities were discovered in 2021, 87 of which were deemed critical. Of those critical flaws, roughly a third (almost 30) have not been addressed by their vendors at all, while another 26% received only an advisory.

Sometimes these advisories are followed up with a patch, the researchers say, but most of the time they simply tell potential victims to reach out to customer support.

The absolute worst year for the discovery of critical flaws in router endpoints was 2020 – the year of the Covid-19 pandemic, and the subsequent rush to remote working. That year, Kaspersky says, 603 new vulnerabilities were discovered, almost three times as many as the year before (207).

These two things are correlated, the researchers further claim, as remote working put most employees at the mercy of their (unpatched and unprotected) home routers. While most workers these days know relatively well how to protect their computers, laptops, and mobile devices, many are unsure what to do about their routers.

According to figures from Broadband Genie, nearly half of users (48%) have never changed their router’s settings, including the default login credentials and the Wi-Fi password. Almost three quarters (73%) don’t think it’s necessary, while 20% don’t know how to change these things.

To keep any internet-connected device secure, there are a number of things a person (or company) can do: keep both firmware and software updated to the latest version at all times; install a strong antivirus solution as well as a firewall; activate multi-factor authentication on any services that offer it; and use a Virtual Private Network (VPN) service.

For routers, specifically, users should always use WPA2 encryption, disable remote access to the router, select a static IP address, disable DHCP, and use a MAC filter.
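As a rough illustration of how that checklist could be automated (this is not from Kaspersky’s report), here is a small audit sketch in Python; the RouterConfig fields and example values are hypothetical placeholders:

```python
# Minimal sketch: audit a router's settings against the hardening checklist above.
# The RouterConfig fields and example values are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class RouterConfig:
    admin_password: str
    wifi_encryption: str        # e.g. "WPA2", "WPA", "WEP"
    remote_admin_enabled: bool
    dhcp_enabled: bool
    mac_filter_enabled: bool
    firmware_version: str
    latest_firmware: str

def audit(cfg: RouterConfig) -> list[str]:
    """Return a list of findings for settings that violate the checklist."""
    findings = []
    if cfg.admin_password.lower() in {"admin", "password", "1234"}:
        findings.append("Default or weak admin password still in use")
    if cfg.wifi_encryption != "WPA2":
        findings.append(f"Wi-Fi encryption is {cfg.wifi_encryption}; WPA2 recommended")
    if cfg.remote_admin_enabled:
        findings.append("Remote administration should be disabled")
    if cfg.dhcp_enabled:
        findings.append("DHCP enabled; a static addressing scheme was recommended")
    if not cfg.mac_filter_enabled:
        findings.append("MAC filtering is off")
    if cfg.firmware_version != cfg.latest_firmware:
        findings.append("Firmware is out of date")
    return findings

if __name__ == "__main__":
    cfg = RouterConfig("admin", "WPA", True, True, False, "1.0.2", "1.1.0")
    for finding in audit(cfg):
        print("-", finding)
```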


Five areas where EA matters more than ever

20th May 2022 | Latest Posts | by ricardo

Businesses are employing enterprise architecture to improve product delivery, risk management, and even employee retention, among other business-critical uses.

The discipline of enterprise architecture (EA) is often criticized for forcing technology choices on business users or producing software analyses no one uses.

But the practice of EA is booming today, and architects of any description are hard to find and “incredibly expensive,” says Gartner Research VP Marcus Blosch.

Forrester Research has identified more than 20 types of enterprise architecture roles being used by its clients. They range from organizational architects who define business and operating models to projects, platforms, and digital architects. “The list is growing and growing,” says Gordon Barnett, principal analyst at Forrester, “moving away from EA that just looks at applications and infrastructure to be truly enterprise. You need an ecosystem of subject-matter experts. You may not call them architects, but they still do the role of an architect.”

Here’s how savvy EA practitioners are helping businesses meet five of today’s most pressing business challenges.

Resiliency and adaptability 

With everything from COVID shutdowns to economic sanctions disrupting operations and supply chains, enterprises are turning to insights from EA to predict and respond to problems more quickly and effectively.

While resiliency has always been a focus of EA, “the focus now is on proactive resiliency” to better anticipate future risks, says Barnett. He recommends expanding EA to map not only a business’ technology assets but all its processes that rely on vendors as well as part-time and contract workers who may become unavailable due to pandemics, sanctions, natural disasters, or other disruptions. 

Businesses are also looking to use EA to anticipate problems and plan for capabilities such as workload balancing or on-demand computing to respond to surges in demand or system outages, Barnett says. That requires enterprise architects to work more closely with risk management and security staff to understand dependencies among the components in the architecture to better understand the likelihood and severity of disruptions and formulate plans to cope with them.

EA can help, for example, by describing which cloud providers share the same network connections, or which shippers rely on the same ports to ensure that a “backup” provider won’t suffer the same outage as a primary provider, he says.

Planning for supply chain disruption

Mike Small, head of the North American region at engineering and digital solutions firm AKKA & Modis (soon to become Akkodis), says EA is helping businesses such as vehicle manufacturers understand whether and how they can ship products without the full complement of hard-to-find components such as semiconductors.

Some of his clients have brought enterprise architects together with the product, analytics, and supply chain experts to ask “Can I still safely sell this product without 100% of the bill of materials? If the answer is yes, how do I go about doing that when my system was designed for zero deviation from the product specifications?” he says. That may require, for example, an analysis of ERP systems to understand all the dependencies and functions that reference a bill of materials, he says.

Radicle Science, which provides online services to measure the effectiveness of health and wellness products, uses EA to track the APIs and data formats used by its data suppliers so changes don’t disrupt the business, says CTO Sheldon Borkin. “The further we got in our modeling about third-party logistics suppliers” the more Radicle realized the need to map not only the APIs each supplier used but the format in which they stored the data they provided to Radicle. “We need to write an adapter to each API and build into EA the need for such adapters and the ability to track them,” he says.
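A minimal sketch of the adapter-per-supplier idea Borkin describes might look like the following; the supplier names, payload shapes, and the Shipment model are hypothetical, purely for illustration:

```python
# Minimal sketch of the "one adapter per supplier API" idea described above.
# Supplier names, fields, and payload shapes are hypothetical.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Shipment:
    order_id: str
    status: str
    carrier: str

class SupplierAdapter(ABC):
    """Each third-party logistics supplier gets one adapter that maps its native
    payload format onto the internal Shipment model, so downstream services
    never have to know supplier-specific formats."""
    @abstractmethod
    def to_shipment(self, payload: dict) -> Shipment: ...

class SupplierA(SupplierAdapter):
    # Supplier A nests the status under a "tracking" object
    def to_shipment(self, payload: dict) -> Shipment:
        return Shipment(payload["orderId"], payload["tracking"]["state"], "supplier-a")

class SupplierB(SupplierAdapter):
    # Supplier B uses flat, snake_case fields
    def to_shipment(self, payload: dict) -> Shipment:
        return Shipment(payload["order_id"], payload["status"], "supplier-b")

ADAPTERS: dict[str, SupplierAdapter] = {"a": SupplierA(), "b": SupplierB()}

def ingest(supplier: str, payload: dict) -> Shipment:
    return ADAPTERS[supplier].to_shipment(payload)

print(ingest("a", {"orderId": "123", "tracking": {"state": "shipped"}}))
```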

Staff recruitment and retention

With staff shortages hobbling industries across the globe, improving the employee experience to retain essential talent has become a strategic imperative. Businesses are using EA to provide not only better applications and services but a work experience that will attract and retain workers.

AKKA & Modis has revamped everything from its network to its authentication tools in the last several years “to provide the same experience in a physical office or a remote location,” Small says. “Remote work has driven our clients to reconfigure their enterprise architecture plan and strategy. EA is essential to ensuring these remote work and collaboration tools are both scalable and secure,” he says. “That is a key differentiator we see in both recruiting and retaining talent.”

EA also “plays a critical role in ensuring new staff can be brought on board as quickly and easily as possible” by understanding which applications and services a new employee needs on their first day and ensuring they have that access, Small says. Making new hires feel productive quickly is essential to retaining them, he says.

Barnett says “people architecture” is a growing form of EA that seeks to understand how changes such as outsourcing, downsizing, and automation affect staff and how to help them adapt. In recent years, for example, much of the work done by a network engineer has been automated. Understanding how such changes affect them, and how to use their skills in new areas, is essential to both retaining employees and maximizing their productivity, he says. 

Improved product and service delivery

Across industries, nimble companies need to focus their IT efforts on the products and services customers need the most rather than implementing technology for its own sake.

Small sees a trend toward redeploying staff that formerly worked in standalone teams on technologies such as cloud or data analytics into “project-based pods to address urgent needs. Enterprise architecture is the glue binding these units together, looking at overall business needs” and ensuring the product or service they develop works well, he says.

At Wells Fargo, EA provides the cloud reference architecture that supports the financial services giant’s move to the cloud, agile software development, and delivering more new “products” such as applications that guide first-time job holders through creating a bank account and getting their first credit card, says Chief Enterprise Architect Manish Vipani.

EA also helps Wells Fargo integrate customer-facing and back-office applications to create a more consistent experience across channels such as in-person, Web, telephone, and mobile applications. Its EA practice is helping it transition from a “spaghetti-like” architecture of point-to-point connections “to more of a ‘lasagna’ type architecture with well-defined tiers connected via APIs,” Vipani says.

The work of its hundreds of EAs helped Wells Fargo to, for example, use such data integration to modernize a home mortgage online application process by pre-populating some data into customers’ applications. A similar approach helped shorten a credit card application process from an average of 25 minutes to four minutes, he says. If improved integration can tell the credit card application that a customer has a direct deposit with the bank, it can prequalify them for a card and process the application with a click of a mouse.
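As a purely illustrative sketch (the field names and thresholds below are invented, not Wells Fargo’s actual rules), exposing account data through an API is what lets an application make a prequalification decision like this without manual data entry:

```python
# Illustrative only: how integrated customer data might drive the prequalification
# decision described above. Field names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class CustomerProfile:
    has_direct_deposit: bool
    monthly_deposit: float
    existing_delinquencies: int

def prequalify_for_card(profile: CustomerProfile) -> bool:
    """A back-office API exposing account data lets the card application skip
    manual data entry and decide eligibility up front."""
    return (
        profile.has_direct_deposit
        and profile.monthly_deposit >= 1000
        and profile.existing_delinquencies == 0
    )

print(prequalify_for_card(CustomerProfile(True, 2500.0, 0)))  # True
```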

“We usually want to roll the product out quickly, and if it is successful, roll it out on a larger scale,” he says. “That requires understanding the systems, whether a core banking platform or wealth management, that are all exposed through APIs.”

Tracking data and APIs

As Radicle works to complete more virtual trials of health products more quickly and prove their results to customers, “our software platform needs to choreograph the people, participants, and supplies to make this scalable,” says Borkin.

To meet this goal, it is redefining the EA components it measures to understand, for example, which data must be stored in which formats so it can be studied over time. “What is important is the communication between the technology team and the research team which runs the trial,” he says. “We agree as a company on the critical data we collect and how it’s organized.”

Radicle is also using EA insights to determine not only how many and what types of databases to maintain, but when data should be exchanged via point-to-point APIs, typically between internally developed services or a one-off integration, rather than through a common service layer designed to be extended over time.

“As we’re building a multiyear repository of research data, just saying ‘we have APIs’ is not going to be enough,” he says. “You might not know all the APIs you need to connect to new data sources or services such as for data normalization that need to interact with the research repository.” Radicle uses EA to “make sure that this key data asset is built to expand in breadth and depth while remaining easy to access for discovering new health insights.”

Courtesy of: Robert Scheier


NVIDIA Embraces Open Source GPU Kernel Drivers, Starting In The Datacenter

13th May 2022 | Latest Posts | by ricardo

NVIDIA’s relationship with the open-source community has historically been a tale of intense on-again, off-again interactions. The company has embraced open source more closely in recent years, but its past refusal to open-source its graphics drivers, citing trade secrets and proprietary technology, has continued to be a point of friction between Team Green and Linux kernel developers.

Well, that all ends now, because NVIDIA has announced it will open-source the kernel portion of its graphics drivers. To be clear, huge swaths of NVIDIA’s graphics drivers remain closed-source, including all of the client-facing portions of the code as well as its drivers for OpenGL, Vulkan, OpenCL, and so on.

Red Hat, Ubuntu, and SUSE are the three distros immediately implementing the new driver.

Still, this announcement is a humongous deal for open-source advocates, as it means that the NVIDIA graphics driver can be properly integrated into the kernel and use GPL-only kernel symbols and functionality. This will also result in improved integration with various subsystems, including HWMON, reducing the reliance on proprietary NVIDIA tools for hardware monitoring and management.

It would be easy to draw a connection between this change and the relative success of SteamOS on the Steam Deck, but gaming isn’t really the play here. After all, the current state of the open-source drivers for desktop GeForce hardware is considered “alpha” by NVIDIA, meaning they’re really not ready for prime-time.

Datacenter GPUs like the A100 are the initial focus of this release.

Instead, these drivers are primarily for the company’s datacenter hardware, at least for now—although everything using the Turing and Ampere architectures is supported. The “Turing and newer” limitation is apparently down to these drivers’ reliance on those GPUs’ inclusion of the GPU System Processor, which we talked about previously.

Of course, given that they’re open source, things probably won’t stay that way for long. NVIDIA itself says work is fully underway in transitioning to the open-source kernel driver as its primary Linux driver, and it’s also inviting community members to create and contribute their own patches—although you’ll have to sign a Contributor License Agreement to do so.


As for why NVIDIA made the decision to do this now, it’s difficult to say. The company itself doesn’t really elaborate in its announcement, beyond saying that it will help “improve NVIDIA GPU driver quality and security,” which is a given. It’s possible that the LAPSUS$ hack could have played a part in this choice, although as far as we’ve seen the majority of the stolen data was never released.

Whatever the reason, this is fantastic news for the open-source community. Everyone benefits from open-source software development, and this is the first step toward a fully open-source graphics driver for the green team’s hardware. If you’re a developer yourself, you can check out the code on GitHub right now.

Courtesy of: Zak Killian


Dell offers data, app recovery support for multicloud assets

6th May 2022 | Latest Posts | by ricardo

Dell is adding data recovery solutions to its APEX portfolio, for data centers and public clouds including Azure and AWS.

Dell is offering an expanded ecosystem of multicloud data management tools for its customers with a focus on data recovery services, adding recovery vault support for on-premises as well as public cloud assets.

“Our customers want help reducing complexity and are seeking solutions that use a common approach to managing data wherever it lives — from public clouds, to the data center, to the edge,” said Chuck Whitten, co-chief operating officer, Dell Technologies, in a statement. “We are building a portfolio of software and services that simplifies on-premises and multicloud environments and offers.” 

System aims to help recover from cyberattacks

As the first leg of this effort, Dell has expanded its APEX portfolio, an on-premises IaaS (infrastructure-as-a-service) solution managed in the cloud, with the introduction of Dell APEX Cyber Recovery Services. This addition is aimed at simplifying recovery from cyberattacks by managing a day-to-day cyberrecovery vault along with other Dell-assisted recovery options.

A Dell cyberrecovery vault is an isolated environment where backups of critical data are kept physically and logically separated from other systems and locations. The vault has a recovery path designed to minimize downtime, expense, and lost revenue from a cyberattack.

Dell claims expertise from 2,000 isolated vault solutions deployed globally. Dell APEX Cyber Recovery Services was made available in the US this month, with broader availability planned for later this year.

“While the interconnectedness from multi/hybrid cloud environments can provide significant flexibility, it also increases the need for sophisticated recovery capabilities that ensure business resilience,” says Gary McAlum, senior analyst at TAG Cyber. “In today’s cyberthreat environment, companies of all sizes are being increasingly targeted by destructive and disruptive attacks that threaten business operations. Clearly, Dell recognizes this opportunity with their new cyberrecovery capabilities.”

Cyberrecovery supports Azure and AWS

Dell plans to bolster the reach of its multicloud ecosystem by adding recovery support on partnered public clouds including Microsoft Azure and AWS (Amazon Web Services).

Within its data protection offerings for public clouds, Dell is releasing Dell PowerProtect Cyber Recovery for Microsoft Azure on top of existing Dell offerings in the Microsoft Azure marketplace, which will allow organizations to deploy an isolated cybervault in the public cloud to securely isolate and protect data from a ransomware attack. The Microsoft Azure recovery environment (or vault) can be deployed within data centers, in a new Azure private network, or in an unimpacted Azure environment, Dell says.

Also adding to this effort is the announcement of CyberSense for Dell PowerProtect Cyber Recovery for AWS. CyberSense will allow organizations to use adaptive analytics, scan metadata and complete files, and apply machine learning and forensic tools to detect, diagnose, and speed data recovery. It will also monitor databases to trace back to the last uncorrupted copy of the data to enable speedy recovery.

“The technologies used by businesses for public cloud integrations are increasingly automated and highly effective,” says McAlum. “Dell’s cyber recovery tools will undoubtedly build on the foundation of cloud integration technology to deliver a user-friendly and most likely, seamless experience. This offering should be a welcome addition to an already strong portfolio of business-enablement capabilities found in their APEX portfolio.”

According to McAlum, integrated cyber recovery is still an evolving market, with only a few vendors, including IBM and Hewlett Packard Enterprise, offering the same level of comprehensive, multicloud recovery capabilities that Dell does.

Both Dell PowerProtect Cyber Recovery for Microsoft Azure and CyberSense will be globally available in the second half of 2022.

Courtesy of: Shweta Sharma


Dropbox unplugged its own datacenter – and things went better than expected

29th April 2022 | Latest Posts | by ricardo

Two years of disaster planning massively reduced recovery time objective, company says.

If you’re unsure how resilient your organization is to a disaster, there’s a simple way to find out: unplug one of your datacenters from the internet and see what happens.

That’s what Dropbox did in November, though with a bit more forethought. It had been planning to take the San Jose datacenter (its largest) offline for some time, and performed extensive tests prior to the actual event. It actually took all three datacenters in the city offline by physically pulling each site’s main fiber connection from its port.

Dubbed the “SJC blackhole,” the experiment was determined to be a success after 30 minutes had elapsed with what Dropbox described as no impact to its global availability. “In the unlikely event of a disaster, our revamped failover procedures showed that we now had the people and processes in place to offer a significantly reduced RTO [recovery time objective],” Dropbox said in a postmortem of the event.

According to the company, RTOs were reduced from eight or nine minutes to four or five.

What was Dropbox thinking?

After parting ways with previous hosting service AWS and building its own datacenters, Dropbox said it realized there was a problem: its metadata was highly replicated, but block data wasn’t. “Given San Jose’s proximity to the San Andreas Fault, it was critical we ensured an earthquake wouldn’t take Dropbox offline,” the company said.

The first attempt Dropbox made to eliminate this centralization was called Magic Pocket, a system that distributes block data to multiple datacenters, each of which can serve portions of files at the same time, without worries about a single datacenter outage taking down the service. This is known as an active-active system because multiple nodes serve files to users simultaneously.

Dropbox ultimately settled on an active-passive failure model, which still replicates blocks across multiple datacenters, but only serves files from a single location. It said this was necessary to implement its plan because of limitations imposed by how Dropbox itself chose to manage metadata.

“These choices severely limited our architectural choices when designing an active-active system, and made the resulting system much more complex,” Dropbox said.
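For readers unfamiliar with the terminology, here is a minimal sketch of the active-passive pattern: blocks are replicated everywhere, but reads are served from a single active site until it fails a health check. This is an illustration only, not Dropbox’s actual implementation:

```python
# Minimal sketch of an active-passive failover policy: data is replicated to every
# site, but reads come from one active site until its health check fails.
# Illustrative only; not Dropbox's actual implementation.
class Site:
    def __init__(self, name: str):
        self.name = name
        self.healthy = True

    def serve(self, block_id: str) -> str:
        return f"{self.name} served block {block_id}"

class ActivePassiveCluster:
    def __init__(self, active: Site, passives: list[Site]):
        self.active = active
        self.passives = passives

    def read(self, block_id: str) -> str:
        if not self.active.healthy:
            # Failover: promote the first healthy passive and demote the old active.
            for i, candidate in enumerate(self.passives):
                if candidate.healthy:
                    self.passives[i], self.active = self.active, candidate
                    break
        return self.active.serve(block_id)

sjc, dfw = Site("SJC"), Site("DFW")
cluster = ActivePassiveCluster(active=sjc, passives=[dfw])
print(cluster.read("blk-1"))   # SJC served block blk-1
sjc.healthy = False            # simulate "blackholing" the active site
print(cluster.read("blk-1"))   # DFW served block blk-1
```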

Failing over and over

A May 2020 failover tooling failure caused a 47-minute service outage, which pushed Dropbox into high gear on improving its disaster recovery systems. It started by creating a dedicated disaster recovery team, which rebuilt Dropbox’s failover-handling software before running tests, of which the November 2021 shutdown was part.

Testing began at Dropbox’s two Dallas-Fort Worth datacenters, and initially things were less than smooth, because the team didn’t realize all of its S3 proxies were running from the datacenter it took offline. A second test proved more successful, which led to the San Jose experiment.

“Much like our second DFW test, we saw no impact to global availability—and ultimately reached our goal of a 30-minute SJC blackhole,” Dropbox said. 

Dropbox’s postmortem is worth paying attention to: not only did it find a way to successfully distribute its services and make its entire system more resilient, it also shows the type of work it takes for a large enterprise to commit to that type of project.

The entire effort to improve resiliency was described by Dropbox as a multi-year, multi-team project. Its nature as a cloud service may mean Dropbox is more complex than other enterprises, but that should serve as a motivator: disaster recovery planning in other companies may be a lot easier.

Dropbox also recommends that other companies perform regular disaster recovery practice exercises. “Like a muscle, it takes training and practice to get stronger.”

Courtesy of: Brandon Vigliarolo


2 years later: Enterprise hardware shifts are here to stay

22nd April 2022 | Latest Posts | by ricardo

Companies are eliminating desktop PCs in favor of mobile gear, adjusting provider strategies to fill their needs.

Few business continuity and disaster planners ever envisioned setting up most of their employees in home offices almost overnight, as the COVID-19 pandemic caught IT unprepared.

Procuring the laptops, headsets, monitors, printers, Wi-Fi routers, tablets, PCs, cabling, and other gear this newly distributed workforce required set off a mad scramble, as harried IT shops and procurement professionals frantically called, texted, and emailed every vendor on their lists only to be told time and again nothing was available.

Eventually, the situation was remedied through a combination of relaxed — or hurriedly introduced — bring-your-own-device (BYOD) policies, acquiring hardware locally, or having IT literally box up on-site hardware for employees to take home.

“We didn’t have a lot of pull directly with [our preferred vendor] Dell,” said Brian Shea, CIO of MedOne Hospital Physicians, a mid-sized provider of clinical specialists to hospitals. “So, we’d go local, which, in my past life [as the CTO of Nationwide Children’s Hospital in Columbus], we would never do. Then we started using Amazon a little bit. We had more flexibility searching multiple vendors.”

In the two years since the initial response to the pandemic, hardware purchasing trends at the enterprise level have shifted to ensure companies don’t neglect their hardware needs. They are moving purchasing away from fixed assets like desktop PCs in favor of more mobile gadgets, adjusting their provider strategy to match their needs. For many businesses, analysts expect, changes are here to stay.

Many companies also provided employees with stipends so they can get what they need on their own, said Andrew Hewitt, a senior infrastructure and operations analyst at Forrester. 

This approach took the onus off severely overworked IT departments to find, buy, and ship everything to their newly minted work-from-home workforce.

“You basically saw enterprises go in about 10 different directions just trying to get something into the hands of people so that they could get their work done,” said Hewitt.

What companies bought

Some IT shops went so far as to set up their customer service reps with thin clients so they could access virtual desktop infrastructure (VDI), but most organizations opted for anything portable that could run a web browser, Hewitt said.

“We had one client, for instance, that was sending home thin clients along with hardware based tokens for authentication,” he said. “They had to order a ton of tokens right at the beginning of the pandemic to enable that. Where something that was more cloud friendly, like a Chromebook or a personal PC, is going to have that authentication mechanism built into it.”

This move to portable devices represents a significant shift in hardware usage and buying patterns. 

Two-thirds of respondents to a recent Spiceworks Ziff Davis (SWZD) report, Hardware Trends in 2022 and Beyond, said desktop PCs were the primary computing device in their organizations in 2018. Four years later, 40% of employees are using laptops and 40% are using desktops. 

As a percentage of spend, companies today spend more on laptops than desktops. Mobile devices such as smartphones and tablets also are being used as work devices today, particularly in Asia-Pacific and Latin American countries.

“This reallocation of spending happened during the pandemic, with the shift to remote work serving as the catalyst,” the study said.

The report also found that, despite growth of cloud computing in all its forms – SaaS, IaaS, and PaaS – during the pandemic, only about half of all workloads today are running in the cloud.

Despite reports of a stampede toward the cloud, half of workloads are still running in on-premises data centers and server rooms. 

Hardware spending as a percentage of overall IT spending has decreased since the start of the pandemic, dropping from 33% in 2020 to 30% in 2022. Servers accounted for just 14% of spending in 2020, and that number is expected to drop to 11% in 2022.

At the same time, cloud spending only increased moderately from 22% in 2020 to 26% of overall IT budgets in 2022, the report said.

“But make no mistake: On-premises servers remain extremely important to organizations worldwide … on-premises and cloud infrastructure will co-exist and grow increasingly interoperable, allowing for greater portability and flexibility that will benefit organizations in a hybrid world,” the report said.

Change is here to stay

For many companies, the old ways of doing business will not work anymore. 

Shea, for example, said he won’t go back to relying on just one vendor. Being a mid-sized company with just a couple of hundred employees that buys hardware only as needed means they do not get the preferential treatment an enterprise-class customer would. 

“We have not moved back to using Dell … we will continue to buy peripherals through local sources or channels like Amazon. We’re able to see the flexibility in pricing by looking across multiple suppliers.” 

For most companies, hardware has become essential again. The pandemic showed reliance on non-mobile assets such as desktops or thin clients can create a lot of problems when global events spin out of control. 

Citing recent Forrester data, Hewitt said only 10% of companies plan to reduce hardware spending on PCs in 2022. This includes laptops, tablets, Chromebooks, and Macs. Half of businesses said they planned to increase their PC spending this year, and 25% said it would remain the same.

“A lot of organizations have seen the value from a business continuity perspective of physical hardware,” he said.

Courtesy of: Allen Bernard


Google Cloud just built a data lakehouse on BigQuery

8th April 2022 | Latest Posts | by ricardo

BigLake, a new data lake storage engine that resembles data lakehouses built by newer data companies, will be at the center of Google Cloud’s data platform strategy.

Google Cloud plans to launch a new data lake storage engine based on its popular BigQuery data warehouse to help remove barriers preventing customers from mining the full value of their ever-increasing data.

BigLake, now available in preview, allows enterprises to unify their data warehouses and data lakes to analyze data without worrying about the underlying storage format or systems, according to Sudhir Hasbe, Google Cloud’s senior director of Product Management for data analytics.

“The biggest advantage is then you don’t have to duplicate your data across two different environments and create data silos,” Hasbe said in a press briefing prior to Wednesday’s Google Data Cloud Summit, where BigLake is being announced.

With BigLake, Google Cloud is extending the capabilities of its 11-year-old BigQuery to data lakes on Google Cloud Storage to enable a flexible, open lakehouse architecture, according to the cloud provider. A data lakehouse is an open data-management architecture that combines data-warehouse-like data management and optimization functions, including business intelligence, machine learning and governance, for data lakes that typically provide more cost-effective storage.

BigQuery is a Google Cloud-managed, serverless, multicloud data warehouse that lets customers run analytics over vast amounts of data in near real time. It processes more than 110 terabytes of customers’ data every second on average, according to Google Cloud.

“We have tens of thousands of customers on it, and we invested a lot in all the governance, security and all the core capabilities, so we’re taking that innovation from BigQuery and now extending it onto all the data that sits in different formats as well as in lake environments — whether it’s on Google Cloud with Google Cloud Storage, whether it’s on AWS or whether it’s on [Microsoft] Azure,” Hasbe said.


BigLake will be at the center of Google Cloud’s data platform strategy, and the cloud provider will ensure that all its tools and capabilities integrate with it, according to Hasbe.

“We are going to seamlessly integrate our data management and governance capability with Dataplex, so any data that goes into BigLake will be managed [and] governed in a consistent fashion,” he said. “All of our machine-learning and AI capabilities … will also work on BigLake, as well as all our analytics engines, whether it’s BigQuery, whether it’s Spark, whether it’s Dataflow.”

Enterprise data sets are growing from terabytes to petabytes, while the types of data — from structured, semi-structured and unstructured data to IoT data collected from connected devices including sensors and wearables — also are increasing. That data typically is stored across different systems with different capabilities, whether in data warehouses for structured and semi-structured data or data lakes for other types of data, creating so-called data silos that could limit access and increase costs and risks, particularly when the data must be moved.

BigLake will support all open-source file formats and standards including Apache Parquet and ORC and new formats for table access such as Iceberg, as well as open-source processing engines such as Apache Spark.
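From an analyst’s point of view, the pitch is that a BigLake table is queried through BigQuery like any other table, whether the bytes sit in BigQuery storage or as Parquet files in a Cloud Storage lake. A minimal sketch using the Python BigQuery client follows; the project, dataset, and table names are placeholders:

```python
# Illustrative sketch: querying a table through BigQuery, which in the BigLake
# model works the same whether the data lives in BigQuery storage or as open
# formats (e.g. Parquet) in a Cloud Storage data lake.
# Project, dataset, and table names below are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")

sql = """
    SELECT order_date, SUM(revenue) AS daily_revenue
    FROM `my-analytics-project.sales_lake.orders`   -- could be a BigLake table over GCS
    GROUP BY order_date
    ORDER BY order_date DESC
    LIMIT 7
"""

for row in client.query(sql).result():
    print(row["order_date"], row["daily_revenue"])
```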

“When you think about limitless data, it is time that we end the artificial separation between managed warehouses and data lakes,” said Gerrit Kazmaier, Google Cloud’s vice president and general manager for database, data analytics and Looker. “Google is doing this in a unique way.”

Courtesy of: Donna Goodison


Cisco DevNet certs jump 50%, fanned by network automation

1st April 2022 | Latest Posts | by ricardo

Cisco’s DevNet certifications emphasize network programming, automation, and application-development skills.

Software skills are becoming increasingly desirable for network engineers and architects, and the uptick in Cisco’s DevNet certification program reflects the trend. According to Cisco, there’s been more than 50% growth in the number of DevNet certifications awarded in the past year.

Cisco says it no longer gives out specific numbers, but in 2020, nearly 8,000 participants earned some 10,500 DevNet certifications, including DevNet Associate, DevNet Professional and DevNet Specialist. These DevNet certifications focus on coding, automation, network access, IP connectivity, security and application development on Cisco platforms, as well as what developers need to know about network doctrines.

Most recently, Cisco announced a new expert-level DevNet certification: Cisco Certified DevNet Expert. Exam topics for DevNet Expert are focused on software skills and include software development, deployment, and design; infrastructure as code; containers, network programmability and automation; and security.

“While CCIE topics have delved deep into the realm of protocol interaction, network design, and reliable and scalable infrastructure, with automation as part of that, the DevNet Expert takes a solid software-first approach as it pertains to the network engineer,” wrote Joe Clarke, a distinguished customer experience engineer at Cisco, in a blog announcing the new certification level.

The DevNet Expert exam is geared for network engineers who are working with new, automation-driven networks that lead to digital transformations across all industries, according to Clarke. “To deliver secure, agile networks, support the future of work, and provide capabilities at the edge, you need people who are experts at wielding software and automation to harness and shape the power of the network,” Clarke said.

Those who earn DevNet Expert certifications will be able to design and deliver the necessary automation solutions to transform a traditional network into one that enables digital transformation; they’ll also be able to build and lead a team to transform the culture of an organization into one that embraces automation as the way to do networking, Clarke stated.

“You don’t want to keep touching things manually. You want to build a nice hierarchy with a single source of truth that has a very clear picture of what the network should look like,” Clarke said. “And then you want to automate not only the deployment of that, but the testing of that, so that you can get more confidence with the configuration changes you’re making. So maybe everything is not fully software defined, but it’s becoming more software driven.”
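A toy illustration of that workflow (intent kept in a single source of truth, device config generated from it, and an automated test run before deployment) might look like this; it is not tied to any specific Cisco platform or API:

```python
# Toy illustration of the workflow Clarke describes: keep intent in a single
# source of truth, generate device config from it, and test the result before
# deployment. Not tied to any specific Cisco platform or API.
INTENT = {  # single source of truth for what the network should look like
    "hostname": "edge-sw-01",
    "vlans": [{"id": 10, "name": "users"}, {"id": 20, "name": "voice"}],
}

def render_config(intent: dict) -> str:
    lines = [f"hostname {intent['hostname']}"]
    for vlan in intent["vlans"]:
        lines += [f"vlan {vlan['id']}", f" name {vlan['name']}"]
    return "\n".join(lines)

def test_config(config: str, intent: dict) -> None:
    """Automated check run before any change is pushed to a device."""
    assert f"hostname {intent['hostname']}" in config
    for vlan in intent["vlans"]:
        assert f"vlan {vlan['id']}" in config, f"VLAN {vlan['id']} missing"

cfg = render_config(INTENT)
test_config(cfg, INTENT)
print(cfg)
```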

At the same time, the need for cloud networking, mobility and observability capabilities is also on the rise.

“The idea of total observability – to know that all of your services are healthy and how those applications are performing – is a hot topic right now,” Clarke said. “Nothing happens without APIs and touch points, and as we move more toward the cloud and more remote users, it will be more important to have network engineers be able to develop centralized policies for all those users to have the same experience.”

Data analytics and the ability to extract and make sense of key data from large data sets are growing requirements, too, Clarke said.

DevNet skills can also help network engineers to handle the cloudification of the enterprise.

“With fewer people actually in the office, the requirement is that everyone still needs secure access to files and applications. How do we handle that?” Clarke said. “There are DevNet skills that give you the ability to know the network and tie together cloud elements in an effective way for the business.”

The first day that candidates can test for the DevNet Expert lab exam is expected to be May 2 of this year.

Courtesy of: Michael Cooney


Microsoft Testing ‘Unsupported Hardware’ Watermark on Windows 11 Desktop

25th March 2022 | Latest Posts | by ricardo

Microsoft’s approach to Windows has long been to support as many devices as possible, even when they didn’t have the hardware to support an ideal computing experience. Things are different with Windows 11, which has a stringent set of system requirements — if you don’t have a relatively new CPU or a hardware security module, no (official) Windows 11 for you. If you install it anyway, an upcoming version of Windows 11 will nag anyone who circumvented those requirements with an “unsupported” watermark on the desktop. 

The watermark is not live in the current official build of Windows 11, so don’t go looking for it on your desktop. It has only just appeared in the new preview version of the OS for Windows Insiders, build number 22000.588. Several Twitter users have been seeing it since last week. The watermark is in the lower right corner, styled like a similar blemish that appears if Windows is not activated. However, this watermark will show up on systems with unsupported hardware even if the OS is activated. 

“System requirements not met,” the watermark reads. “Go to settings to learn more.” You could ignore this message if you don’t mind looking at it on the desktop every day. If you visit the settings menu, it will link you to Microsoft’s site to learn more about Windows 11 hardware requirements. However, if you forcibly installed Windows 11 on an unsupported computer, you probably know the requirements very well. 

Microsoft requires 4GB of RAM, 64GB of storage, and a 1GHz dual-core CPU. However, that CPU needs to be 8th-gen or newer on the Intel side and Zen 2 or newer for AMD. Systems also need a Trusted Platform Module (TPM 2.0). While Microsoft does not specifically stand in your way if you want to install Windows 11 on unsupported machines, it warns the result probably won’t work well. Microsoft says that devices without the right hardware experience more crashes, and they might not receive system updates on time. The company has not yet degraded the patches it offers to “unsupported” systems, and there is no evidence supporting its crash claims.

For someone who doesn’t know their PC has unsupported hardware, this watermark could be an important clue. Those who know and understand the risks can continue to forge their own path and hide the watermark. You’ll have to modify the registry by changing the value of an entry called “UnsupportedHardwareNotificationCache.” There’s no guarantee Microsoft won’t find another way to shame you later, though. The version with the watermark has reached “release candidate” status, indicating it could come to the public release soon.
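For reference, a sketch of that registry edit is below. The key path and the “SV2” value name follow community reports for this Insider build rather than official Microsoft documentation, so treat them as assumptions that may change; edit the registry at your own risk:

```python
# Sketch only: hiding the watermark by editing the registry entry the article
# mentions. The exact path ("Control Panel\\UnsupportedHardwareNotificationCache")
# and the value name ("SV2") follow community reports for this Insider build and
# may differ or change in later releases; edit the registry at your own risk.
import winreg

KEY_PATH = r"Control Panel\UnsupportedHardwareNotificationCache"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    # Setting the reported DWORD to 0 is said to suppress the desktop watermark.
    winreg.SetValueEx(key, "SV2", 0, winreg.REG_DWORD, 0)

print("Watermark flag updated; sign out and back in (or reboot) for it to take effect.")
```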

Courtesy of: Ryan Whitwam



