How To Know You’re Getting The Best Out Of Your Life Insurance Policy

With things getting more and more expensive in Australia, it is paramount to ensure you’re getting the best out of everything you pay for. Life insurance is simply something we see going out of our accounts every month, but what are we actually paying for, and how do we know we need it? No matter how many comparison sites we head to and people we speak with, the only thing that gets you your money’s worth with life insurance in Australia is finding a tailored solution. 

We want to walk you through a few things you should consider when purchasing or reviewing your life insurance policy. We will start with what a basic policy should cover, discuss some essential additions, and explain how you can avoid paying over the odds.

Basic life insurance policies: What they usually cover

Standard life insurance in Australia is usually a pretty self-explanatory package. There are always a few things included that may not be essential, but every basic policy should at least include death cover, terminal illness cover, and funeral advancement benefit.

Death cover – This is the single payment that will be paid to loved ones after your passing.

Terminal illness cover – You are entitled to an early payment if you are diagnosed with a terminal illness. 

Funeral advancement benefit – This takes a portion of the final payout and releases an advanced payment to cover funeral costs. 

What else should be included in a good life insurance policy?

While the fundamental components of a basic policy offer security and peace of mind in case of death or critical illness, there are things you can do to comprehensively safeguard yourself and your loved ones. Australians may have some of these additions included in their work insurance policy or Super, but it doesn’t hurt to be extra safe.

TPD cover – This cover offers a lump sum payment if injury or illness causes a disability and you cannot work anymore. 

Income protection – This can subsidise a percentage of your salary if you cannot work.

Trauma – This cover provides a lump sum payment if you are diagnosed with a serious illness or suffer a stroke, heart attack, or similar event.

Dependents – If you have children, it doesn’t cost much more to include them in your insurance package. 

One final tip to get the most out of life insurance in Australia

To put it simply, the best thing you can do to reduce costs is to get life insurance while you are young and do your best to be healthy. In Australia, multiple things can affect the price of life insurance, but the primary ones are age & gender, health & lifestyle, and work. Of course, the more you want out of your policy, the more the price will increase. It is essential to go with an Australian insurance provider you can trust, so do some research. We hope this article can guide you if you are considering purchasing or reviewing your policy. Remember, it may not always feel worth it, but life insurance may be the wisest purchase of your life. 

Source: TG Daily – How To Know You’re Getting The Best Out Of Your Life Insurance Policy

Sam Altman vs. the OpenAI Board vs. Microsoft: How to Kill the Golden Goose

I’ve been a technology analyst since the early 90s, and I’ve seen company boards do a lot of stupid things, mostly having to do with hiring the wrong CEO and then not recognizing the mistake soon enough to mitigate the resulting damage. Normally, the issue is that the board isn’t independent enough to do the job. But firing OpenAI’s CEO Sam Altman sets the record for boneheaded stupidity.

OpenAI is a company of around 700 employees that was on track to become the most valuable company in the world, and in one move, the board of directors effectively killed it. Yes, it will take months or maybe more than a year for the company to fully fail, but already 500 of the 700 employees are threatening to leave due to this insane move unless Altman is brought back.

But now Altman can’t come back because, over the weekend, the OpenAI board didn’t agree with his reasonable (given what happened) demands, and now he is effectively building an OpenAI clone inside of Microsoft.  

In the end, this should be a huge boon to Microsoft, and it will certainly provide a more stable environment for both OpenAI employees and customers, but right now, OpenAI is effectively redundant. 

Let’s talk about that this week.

Killing the Golden Goose

There is an old fairy tale about a goose that laid golden eggs, and I think it defines what happened here. The OpenAI board was concerned about Altman’s direction. Whether that had to do with money or with concerns that AI safety wasn’t being properly addressed, the board moved decisively to remove Altman without first testing the waters in terms of how major clients like Microsoft would react (they were pissed), how investors would react, or how the news would be received by the employees, most of whom are now planning to leave.

The kind of success that OpenAI has achieved is exceedingly rare. In terms of valuation, it was growing faster than any company of its type in history. You don’t muck with that kind of success, and you typically don’t get that kind of success unless a founder is involved.  

Altman and his team were critical to the success of OpenAI and to the continued operation of the company. More importantly, there is a massive shortage of people who understand and can work with generative AI, so operational employees are incredibly valuable. The removal of OpenAI’s leadership put a recruitment target on their backs and made them far more willing to take recruiters’ calls and consider leaving.

Microsoft vs. OpenAI

OpenAI is a relatively small company despite its massive estimated valuation. This means its ability to hold off competitors was limited. The Microsoft partnership mitigated this somewhat, but the decision, which Microsoft was left out of, to remove OpenAI’s leadership was likely seen as a breach of trust by that company.

To partially address this, Microsoft has hired Sam Altman to form what appears to be an OpenAI clone inside of Microsoft. Microsoft clearly has far more resources and is all in on ChatGPT, OpenAI’s AI product, and it should be far more capable of defending ChatGPT as a product than OpenAI could be, given Microsoft’s size, reach, sales channels and government lobbying efforts. In effect, Microsoft is doing a very inexpensive (depending on any subsequent litigation) company acquisition in which it only gets the people, but it has licensed the technology and should be clear to advance it.

Given that this move was precipitated by an ill-advised decision by OpenAI’s board of directors, and given Microsoft’s $10B investment in OpenAI (which it may not want to divest) and the willingness of OpenAI’s employees to move to another company, this was an impressive coup.

Microsoft moving decisively on what was likely a very tight window of opportunity is to its credit: had it stopped to deliberate, Altman might eventually have been hired by someone else or gone back to OpenAI.

Wrapping Up: The Litigation Wave

I expect there will be an impressive amount of litigation that will result from these moves. Given Microsoft has a relatively large and well-regarded legal team, it should be able to navigate this reasonably well. However, investors in OpenAI are undoubtedly pissed as their board effectively killed their golden goose. 

There is a good chance that OpenAI’s board not only ended its company but also ended its members’ careers, given the amount of visible damage this decision did to the firm and how the coverage is now almost exclusively focused on this massive board mistake rather than on Altman or anyone else.

This is a good lesson in tactical vs. strategic thinking. The OpenAI board clearly hadn’t thought things through and decided to use raw power to get Sam Altman to do something he didn’t want to do. Then, not realizing it should have been bluffing, the board terminated Altman, doing potentially terminal damage to OpenAI.

In short, Microsoft and Satya Nadella get praise for turning lemons into lemonade, and the OpenAI Board gets credit for potentially killing what might have been the most lucrative golden goose ever to have existed. 

Source: TG Daily – Sam Altman vs. the OpenAI Board vs. Microsoft: How to Kill the Golden Goose

Revolutionizing IT: Navigating the Transformative Impact of Edge Computing

Edge computing is emerging as complementary to centralized cloud models by distributing processing and data storage closer to endpoints. This helps drive faster response times for latency-sensitive devices and reduces back-end workloads. As IoT adoption grows across industries, edge infrastructure plays an increasingly vital role.

Processing Moves to the Network Edge

With edge deployments of micro data centers, virtual private server instances, and fog nodes located near IoT sensors and user devices, organizations can now execute data processing, analytics and action responses locally at the edge rather than transporting all raw sensor data back to centralized core data centers. By pushing compute and storage resources out to the network edge, organizations can carry out initial processing, filtering and analysis of data closer to where it is generated and where responsive actions are required.

This localized edge computing provides several key benefits to applications. Firstly, by processing data locally rather than transporting it long distances over the network to core data centers, applications avoid bandwidth constraints and experience reduced latency. This localized compute power allows data to be transformed into insights much closer to where, and when, it is captured without straining network capacity. Additionally, as the data no longer needs to travel to a distant core for processing, applications see significantly improved response times for any actions that may need to occur.
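The pattern described above can be sketched in a few lines. This is a purely hypothetical illustration (no real device, vendor, or API is assumed): an edge node filters and aggregates raw sensor readings locally, so only a compact summary travels over the backhaul instead of the full raw stream.

```python
# Hypothetical sketch: an edge node aggregates raw sensor readings locally
# and uploads only a compact summary, instead of streaming every sample.

def summarize_readings(readings, threshold):
    """Filter out-of-range samples and reduce the rest to a summary dict."""
    valid = [r for r in readings if r <= threshold]
    anomalies = len(readings) - len(valid)
    return {
        "count": len(valid),
        "anomalies": anomalies,
        "min": min(valid) if valid else None,
        "max": max(valid) if valid else None,
        "avg": round(sum(valid) / len(valid), 2) if valid else None,
    }

# Simulated temperature samples captured at the edge (98.7 is a glitch):
samples = [21.4, 21.6, 98.7, 21.5, 21.3, 21.7]
payload = summarize_readings(samples, threshold=50.0)
# Only `payload` (a handful of bytes) would be sent to the core data
# center, rather than the full raw stream.
```

The same shape applies whether the "summary" is an average, an anomaly alert, or a locally computed control action: the bandwidth and latency savings come from deciding at the edge what is worth sending upstream.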

Edge computing therefore greatly enhances the performance of latency-sensitive applications, particularly in real-time domains like industrial IoT systems requiring fast diagnosis/control, mobile augmented reality needing immediate computation, and telemedicine relying on instant analysis. With edge resources and fog deployments, these time-critical applications that demand single-digit millisecond responses can now be efficiently supported by leveraging distributed processing at the network edge instead of transporting all data to a centralized core.

New Network Architectures Emerge

The edge computing model necessitates architectural changes, with some centralized systems connecting many distributed edge sites. This gives rise to hybrid connectivity architectures that incorporate:

  • Backhaul links between edge locations and core data centers using high-bandwidth networks to upload aggregated insights and push updates.
  • Low-power wide-area radios deployable anywhere for long-range connectivity between geographically dispersed edge nodes and fog servers.
  • Cellular technologies like 5G acting as “foggy clouds” that can seamlessly integrate edge infrastructure at the network level, whether in industrial machines, buildings or telecom towers.
  • Local area networks within factories, offices and cell towers facilitating fast communication between co-located edge devices, gateways and controllers.
  • Edge servers, fog nodes and micro data centers distributed across wide footprints yet integrated centrally for management and oversight.

With such diverse infrastructures, edge traffic originating from IoT, mobile and other latency-sensitive systems can often be processed, routed and managed locally without needing to return to centralized core networks. This distributed architecture improves resiliency by preventing single points of failure and helps scale overall throughput by balancing load locally across edge resources. Application response times and network bandwidth utilization benefit as core infrastructure remains unburdened by real-time data transmissions and processing.

Management Challenges at the Edge

As the number of edge nodes proliferates, it introduces new operational challenges for IT teams. Edge sites are often in small remote locations with unreliable or intermittent connectivity. This can make tasks like patching, security updates, and remote hands management of edge hardware more difficult compared to centralized data centers.

However, VPS hosting provides a scalable and cost-effective way to deploy edge computing resources. Virtualization makes it possible to optimize the available hardware to host multiple edge workloads. Organizations can leverage VPS instances at the network edge to execute latency-sensitive functions closer to endpoints, improving performance for real-time applications and easing bandwidth constraints.

To help address these issues, standardized hardware deployment across edge tiers simplifies life-cycle management, and tools for integrated monitoring of decentralized infrastructure improve visibility. Additionally, the edge demands simpler orchestration software than data centers do, since automated tasks like patching, backups and scaling must function reliably even in low-bandwidth environments.

In summary, edge computing is revolutionizing IT infrastructure design by pushing data processing closer to endpoints. This improves performance for latency-sensitive applications and reduces back-end load, though standardized edge management must still evolve to scale decentralized systems.

Written by Scott Weathers

Source: TG Daily – Revolutionizing IT: Navigating the Transformative Impact of Edge Computing

Microsoft Copilot: Enabling the Disabled and Supercharging Kids

This week I’m at Microsoft Ignite. At the end of the second keynote, there was a video about how Copilot could be used to help severely disabled people become productive. The video featured a guy in Ukraine, where the massive number of critically injured civilians means the country needs technologies that can help its now heavily injured and disabled population return to work.

This made me think about how this technology could change education, resulting in kids who enjoy school more and who develop skills that put them in high demand when they graduate, regardless of the industry they go into.

This technology is a massive game changer, and the operational sessions were standing-room-only as developers and service providers race to understand and learn to deploy it effectively.  

Helping the Severely Disabled

If you watch the video mentioned above, you’ll see a guy who would have been severely limited in his ability to earn a living as a developer. He clearly knew the technology and had the mental acuity to do an excellent job, but he was so physically compromised that his normal typing speed was reduced to a few letters a minute. Even a short utility program that might take someone else minutes to create would likely take him hours or even days. He was also speech impaired, so simple voice-to-text technology wouldn’t work, either.

Copilot is a Large Language Model (LLM) that is now being applied across Microsoft’s product set, ranging from productivity and communications tools to coding tools. It is an increasingly advanced AI that can determine intent and then execute against that intent automatically.  

As a result, Copilot was able not only to mitigate the disabled developer’s disability but also to provide capabilities, such as immediate error checking and code optimization, that a developer without a disability who isn’t using Copilot wouldn’t normally have.

In fact, as this technology advances, it may allow extremely disabled people whose minds are still intact to outperform any other developers who don’t have access to this technology. 

This is huge because most of us don’t want to be a burden on our families. If you were born with a disability or were disabled in a war zone, you’d tend to question whether you should even be alive. Your existence, which was already painful, would lack meaning and, depending on your caregiver, would be incredibly lonely. 

With this technology, the disabled can not only work effectively but also better communicate and collaborate with others and gain what we all often take for granted: a purpose in life. Not to mention that society gets another productive member rather than a lifelong dependent. The disabled person is better off, their families are better off, and society is better off as a result.

Given that Copilot is being applied to every industry, its impact on the victims in Ukraine would be huge, not only helping them have meaningful lives but also helping the country post-war by turning dependents into productive members who can help rebuild the country and its economy. In fact, once the war is over, I expect Ukraine will be one of the leading technological countries, with expertise in disaster recovery and low-cost autonomous weapons. Like Japan after World War II, Ukraine could become far more capable than it was before the war.


Supercharging Kids

As I watched the video, it occurred to me that kids are often far more creative and imaginative than adults are, but they lack the skills needed to create the art, fill out the stories, and communicate the concepts that their less restricted minds might otherwise create. A child raised with a tool like Copilot will be capable of creating things that an adult might never think of because, as we age, our imaginations become more restricted and our fear of failure more pronounced.

Think of kids creating programming for other kids, creating art that isn’t yet corrupted by the environment and coming up with ideas and concepts that people believe are impossible but aren’t. And as they advance with AI, once they become adults, they’ll have a depth of understanding regarding how to use AI that should eclipse what anyone from a prior, non-AI enabled generation might be able to do. 

It would be the difference between someone who started dancing or playing an instrument at an early age and someone who learned later in life, except that their instrument will be AI. They’ll lack the fear of using something new because, to them, it has always existed. What they create as kids and as adults will challenge our current limitations and make it far less likely they’ll boomerang back home once they grow up.

Can you imagine what an AI Savant would be worth in an AI-driven world? And using AI is a lot of fun. It engages the mind and sets the child up for a far more successful and lucrative future.  

Wrapping Up:

When I came to Microsoft Ignite, I knew Copilot was going to be a game changer. But seeing that developer in Ukraine made me realize it would be far more than that for the severely disabled who are unable to work or even communicate. Applied broadly, this could be a huge benefit to our disabled soldiers, those born with severe defects, or those who have been injured in accidents and might be looking at a depressing and unproductive future. And we are still at the beginning of this AI wave. Imagine what the world will be like in 5 to 10 years when this technology begins to approach its full potential. The world is going to be very different with the firms and countries that have embraced this technology rapidly outpacing those that do not.   

AI will make us better, faster and much more productive, and make a massive difference in quality of life. That alone would change the world for the better, raising voices and imaginations that are currently unheard to true powers for positive change.  

Source: TG Daily – Microsoft Copilot: Enabling the Disabled and Supercharging Kids

Could AI Eliminate Bias In Recruitment – Or Make It Worse?

What happens when human resources is no longer run by humans? We may soon find out. Artificial intelligence (AI) tools have been creeping into our working lives for the past few decades via process automation, data analysis and virtual chatbots. But it wasn’t until ChatGPT entered the market in 2022 that questions really started being asked about the future of mankind’s involvement in long-standing business activities — recruitment included.

AI models are now widely leveraged to carry out duties like copywriting and coding. But could they really find their way into hiring practices? AI has been touted to streamline sifting and increase interviewing efficiency — but the jury’s out on whether these intelligent tools could eliminate bias in recruiting or exacerbate it further.

Here, we’ll explore some of the challenges surrounding AI and its implications for recruitment.

What is bias in recruitment?

Bias in recruitment refers to unfair and prejudiced attitudes or preferences that influence hiring decisions. These biases can be conscious or unconscious — they are generally based on a candidate’s individual characteristics, such as race, gender, age, ethnicity, disability, or sexuality.

They can also manifest as ‘affinity bias’, the tendency to favour people with interests, backgrounds and experiences similar to our own.

It’s widely understood that all people are affected by some form of unconscious bias. For example, we might refer to associations and stereotypes to ‘fill in the gaps’ when trying to get to know somebody new, or treat them more kindly if we see similarities to ourselves in them.

However, this can have detrimental effects on hiring practices.

Why is bias in recruitment an issue?

Bias in recruitment has a number of consequences. It can lead to the unfair exclusion of qualified candidates, reinforcement of harmful stereotypes, and limitations on workplace diversity.

And this is a problem. Diversity consultants EW Group explain that “workforce diversity has a measurable effect on business performance”, because “diverse teams are more innovative, more productive and perform better in problem-solving”. As a result, many organisations are striving to promote fair and inclusive hiring practices for better business as well as better ethics.

Some companies aim to identify and mitigate recruitment bias through standardised processes, diverse interview panels, and dedicated recruitment and selection training.

How could AI help reduce bias in recruitment?

According to a 2022 study, 65% of recruiters already use AI in the hiring process. But how is it being implemented?

Advocates claim that AI can reduce bias in recruitment by introducing objectivity and consistency into candidate screening. Talent acquisition platform Jobylon posits that “AI is able to screen candidates objectively based on factors such as qualifications and experience without relying on subjective factors such as age, gender, and race” — while human recruiters are inherently influenced by these.

It’s also suggested that AI could help craft unbiased job descriptions by identifying and suggesting neutral language to use.

By minimising human intervention and relying on data-driven decision-making, AI may have the potential to reduce bias and precipitate more equitable hiring outcomes for organisations.

What are the risks of leveraging AI in recruitment?

But in spite of the prospective benefits, AI also carries inherent risks, including the potential to exacerbate biases.

One concern is that AI algorithms may ‘inherit’ biases when they are trained with historical data. If previous hiring decisions were skewed against specific characteristics, AI systems trained on this data may inadvertently perpetuate those same biases.

For example, if women have been historically underrepresented in a company’s workforce, the AI system trained on previous hiring data may lean in favour of men who resemble the existing workforce, further marginalising women.
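This inheritance mechanism can be seen in miniature with a toy screening model. The data and logic below are entirely fabricated for illustration (this is not any real recruiting system): a naive screener that scores candidates by the historical hire rate of people sharing their attribute value simply reproduces the skew in its training records.

```python
# Toy illustration of bias inheritance (fabricated data, not a real system):
# a naive screener scores candidates by the historical hire rate of past
# applicants who share an attribute value.

from collections import defaultdict

def historical_hire_rates(records, attribute):
    """Compute the hire rate per value of `attribute` from past decisions."""
    totals = defaultdict(int)
    hires = defaultdict(int)
    for rec in records:
        totals[rec[attribute]] += 1
        hires[rec[attribute]] += rec["hired"]
    return {value: hires[value] / totals[value] for value in totals}

# Historical records where women were hired less often than equally
# qualified men:
history = [
    {"gender": "M", "hired": 1}, {"gender": "M", "hired": 1},
    {"gender": "M", "hired": 1}, {"gender": "M", "hired": 0},
    {"gender": "F", "hired": 1}, {"gender": "F", "hired": 0},
    {"gender": "F", "hired": 0}, {"gender": "F", "hired": 0},
]

rates = historical_hire_rates(history, "gender")
# The "learned" screener now scores men at 0.75 and women at 0.25:
# past skew becomes future skew unless the data is corrected.
```

Real systems are far more complex, but the core failure mode is the same: a model fitted to skewed decisions treats the skew as signal, which is why auditing and rebalancing training data matters.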

In one study, researchers from Cambridge University argued that leveraging AI to reduce recruitment bias is counter-productive. In a statement to the BBC, Dr Kerry Mackereth explained that “tools can’t be trained to only identify job-related characteristics and strip out gender and race from the hiring process, because the kinds of attributes we think are essential for being a good employee are inherently bound up with gender and race”.

Additionally, AI algorithms can inadvertently identify proxy variables correlated with certain characteristics — for example, a postcode or school attended to indicate socioeconomic class. This means they could indirectly introduce biases into decision-making — even in ways that human hiring managers don’t.

The takeaway

There’s no doubt about it: artificial intelligence is here to stay, and it certainly has useful applications in the recruitment process. However, it’s crucial that it doesn’t go unchecked by a human. Ultimately, AI’s effectiveness in eliminating bias depends on the quality and diversity of the data it is trained on, as well as ongoing monitoring by host organisations to ensure fairness.

When leveraging AI, HR teams should be prepared to regularly update training data and employ techniques to de-bias algorithms. Only then can they ensure that AI truly contributes to fair and inclusive recruitment processes, rather than reinforcing existing biases.

Written by Adam Eaton

Source: TG Daily – Could AI Eliminate Bias In Recruitment – Or Make It Worse?

Rory Brown Details How Lydian’s Influence and Legacy Survives Today

Like many cultures throughout history, the Lydians didn’t survive but were eventually assimilated into another society following defeat — in this case, by the Persian Empire. Even so, according to enthusiast Rory Brown, Lydians left a lasting mark on history during their roughly six centuries of existence, dating from about 1180 B.C.E. to the middle of the 6th century B.C.E. 

While very little in the way of firsthand Lydian history remains, historians of the time, most notably Herodotus, wrote about Lydian culture and society, and artifacts unearthed from Iron Age burial mounds help to paint a picture of this once-thriving civilization. Here, Rory Brown explores the lasting influence and legacy of these ancient people.

Overlap With Greek Society

Greek and Lydian societies, both prominent during the Iron Age, shared a range of cultural features. While not well understood due to the few surviving samples, Lydian writing is similar to Greek, although it has distinctive characteristics.

Greeks and Lydians were both polytheistic, and Lydian religious figures like Artimu, Pldans, and Baki strongly resemble the Greek counterparts Artemis, Apollo, and Dionysus, respectively. Sharing between the two cultures, combined with the limited first-hand historical accounts from Lydia, makes it hard to know where some concepts originated and who adopted what.

Among the most notable exchanges was the Greek assimilation of Lydian commercial practices. Lydians are credited with being the first culture to establish retail shops and a monetary system featuring gold and silver currency in the form of coins. The Greeks adopted these practices in the 6th century B.C.E., sparking the Greek commercial revolution.

The royal emblem displayed on Lydian coins featured lion iconography, starting with two lion heads and later switching to just one. Some of the earliest archaic Greek coins also featured animals and images of religious figures like Athena.

Influence on the Persians

Croesus, the last of the Lydian kings, was known for extravagant wealth and a penchant for war. Upon coming to power, he immediately began attacking the Greek city-states of Asia Minor, conquering the western half of the region before eventually setting his sights on the threat of the expanding Persian Empire.

Allying with Egypt, Babylonia, and Sparta, he launched an attack on Persia, which ultimately led to Lydian defeat. Lydia was then integrated into the Persian Empire, although the area would fall to Alexander the Great and Roman rule just two centuries later.

That said, the Persians adopted one aspect of the Lydian culture of particular interest to Rory Brown: Lydian use of rare metal coins as currency. Almost immediately following Persian success, Cyrus the Great introduced coinage to the Persian people. Interestingly, this was roughly a century after the Greeks adopted the practice.

Modern Impact

Although the duration of the Lydian civilization was relatively short in the span of human history, there’s no denying the lasting contributions this society produced. Not only were Lydians an integral part of the ancient world, sharing concepts and customs with neighboring cultures, but their influence also remains in countries across the globe that still rely on coinage as a form of currency today.

About Rory Brown, Lydian Enthusiast

Rory Brown is a Managing Partner at Nicklaus Brown & Company. He is the Executive Chair of Goods and Services at Blueriver. Providing excellence in the industry for over two decades, Mr. Brown was chosen as the Financial Services Entrepreneur of the Year by Ernst & Young, and he has founded several companies in the Inc. 500. Mr. Brown researches the Lydian Kingdom and ancient currencies in his spare time.

Source: TG Daily – Rory Brown Details How Lydian’s Influence and Legacy Survives Today

IBM and Why it Was Important to Move From DIY to Prescriptive/Automated HA/DR

One of the things we’re learning this decade is that we aren’t at all prepared for the many manmade and environmental disasters emerging worldwide. It has become very clear that for critical systems that must remain viable during these disasters, DIY (Do It Yourself) just doesn’t work. As is often the case, the DIY workarounds aren’t even less expensive. Such efforts don’t usually build on prior experience; as a result, the firm doing the implementation learns on the job, which is arguably the most expensive way to learn about any complex thing you are creating.

IBM just released a comprehensive blog post about its Power platform by Tom McPherson (GM for IBM Power), in which he indicated that IBM recognized DIY just wasn’t working for critical HA/DR (High Availability/Disaster Recovery) events. In those cases, you need a more prescriptive and automated solution that works out of the box and keeps working even if you can’t get staff into the site (thus the need for automation). There is a bit of old IBM in this change, and that is a good thing.

Let’s talk about that this week.

The HA/DR Problem

The big problem with High Availability/Disaster Recovery systems is that they need to be bulletproof and able to operate for extended periods without human intervention. The last thing you need when you are already dealing with some kind of localized or global disaster (like a pandemic) is to have to go to the office to make a service call, or to have people on premises who should have been evacuated.

You need systems consistent with that old Timex wristwatch slogan: they can “take a licking and keep on ticking.” These systems will be used to keep the company and government running during a disaster and will be critical to providing the services companies and people need during a disaster.  

Many, if not most, of the existing systems were built from parts by people who wanted to save money and do it themselves. But in most cases, these people are just making assumptions and guesses about what is needed. They don’t have the depth of experience and understanding of a global vendor that has run into problems like this before and can come to the table with compelling solutions based on what it has learned.

IBM’s Advantage

Given IBM’s mainframe and AS/400 background, IBM understands high availability. A few years back, there was a site at which construction workers opened up a wall and found an AS/400. It hadn’t been touched in years, but it was still running. These machines had several usability issues compared to current-generation hardware, but one thing they did extremely well was keep running while requiring lower levels of maintenance than their alternatives.

IBM has deep historical expertise in what is needed to create an HA/DR solution that is more purpose-built and automated, and this offering showcases that experience. IBM knows that the only way to create a true solution in this unique space is to use all the company’s experience from supporting firms, and itself, through disasters to create a solution that could survive those disasters even if people are not coming into work.  

Wrapping Up:

In his recent blog post, Tom McPherson pointed to IBM’s pivot from DIY approaches to far more reliable, high-performing, appliance-like plug-and-play alternatives. By making this move, IBM is showcasing decades of understanding products like this and its unmatched experience in providing related solutions.   

This is why banks favor IBM. Financial institutions can lose millions in mere seconds of downtime. IBM’s reliability is even more critical for infrastructure and communications during disasters and for efforts related to national defense, two areas in which IBM is also well known for providing uniquely robust solutions. 

In the end, I doubt there is any company (outside of perhaps a specialist in the segment) that would better understand the needs of an HA/DR solution. As this solution rolls out, I expect the firms that use it will be praised for their prowess while the firms that don’t will be criticized for being down at those critical times.  

Source: TG Daily – IBM and Why it Was Important to Move From DIY to Prescriptive/Automated HA/DR

Top 10 of the Best Companies for Software Development Outsourcing in Argentina

Outsourcing your work to IT companies in Argentina can be a great idea and it can help you speed up your workflow. The value you can get from outsourcing can be incredible, and it always comes down to pushing the limits and bringing in an excellent result. With that in mind, it always makes sense to find a good outsourcing business, and we are here to assist.

3XM Group

3XM Group is widely known for their focus on the latest technologies. The company offers custom software development, staff augmentation and product development. It’s an excellent option to consider, and the value you can obtain is nothing short of incredible in the long term. 


BairesDev

BairesDev has a large talent pool of over 4,000 developers. That means they can easily scale according to your needs and requirements. The value you obtain is second to none, and you will find them to deliver an excellent set of benefits all the time. 


NaNLabs

NaNLabs was created over 10 years ago, and it has great full stack developers. They also include consultancy services, aside from the regular software development solutions. 


Azumo

Azumo has a true focus on AI and cloud services, but also data, web and mobile. It encompasses everything you need from development outsourcing, along with state of the art solutions. 

Svitla Systems

Svitla Systems has locations in Argentina and Mexico as well. They do an excellent job of innovating, offering IoT, AI, QA and consulting alongside development services. They are very efficient and offer excellent value for money.


Artelogic

Artelogic specializes in custom development and they also have a huge range of services. They are easy to access and have teams in Argentina too. On top of that, their quality is excellent and they know how to deliver exceptional value.

Innowise Group

Innowise Group does an excellent job when it comes to completing projects of any complexity within the IT world. They are known for handling every step of development, while allowing you to save a lot of time and money. It’s an amazing solution for anyone that wants comprehensive, professional services. 

DePalma Studios

DePalma Studios is also a company that’s very good at development outsourcing. They are veteran owned, and they can help you with a vast number of different solutions. 


Virtualmind

Virtualmind is close to 20 years old in this industry and they can help with mobile and web development. Virtualmind can also assist with custom development, along with UI and UX design services, among many others.


POSSUMUS

POSSUMUS is a popular IT staff augmentation business, and it’s also present in the US. They can assist with ecommerce and custom software development along with a variety of other services and solutions. 

As you can see in this article, Argentina has many great software development services. Some of them are completely local; others are part of a larger brand. Regardless, you will always have access to great services and exceptional results, and it becomes a lot easier to outsource any type of project quickly.

Written by Scott Weathers

Source: TG Daily – Top 10 of the Best Companies for Software Development Outsourcing in Argentina

AMD’s Amazingly Strong Q3 Results

AMD continues to showcase the benefits of focused management and solid execution. This quarter (Q3 2023) delivered impressively strong results in what has been an uneven market, with nervous customers and ever-higher interest rates collectively creating a drag on sales outside of AMD’s control. And this wasn’t a one-shot, either. AMD’s outlook is also very positive and aggressive. Given that AMD CEO Lisa Su tends to be conservative in her outlook, that is a good thing. 

Let’s talk about some of the highlights and lowlights of AMD’s financial results.

Data Center

AMD’s data center efforts were particularly powerful in what has been a relatively soft market due to high capital costs and concerns about consumer demand (which has stayed surprisingly strong). While the related revenue was flat year-over-year, it was up 21% sequentially, showcasing impressive sales performance last quarter in a soft market. AMD also reported double-digit growth in the cloud. 

In addition, AMD reported that Dell, Lenovo, Supermicro and several others launched new EPYC CPU platforms, increasing the total available market for the parts in telco, retail and manufacturing applications. 

EPYC has been doing surprisingly well and I’ve heard from several OEMs that they are impressed not only with the part but with AMD’s execution over the last 5 to 10 years. While other firms seemed to struggle through downturns and were forced to downsize, AMD just plugged along, and this resulted in consistency, reliability and increased trust between AMD and the OEMs it services. 

AMD’s MI300X AI-focused GPU platform also saw a huge uptick to around $300M against an earlier estimate closer to $200M. MI300A GPUs weren’t material in volume given they just started production recently, but demand is reported to be high, and projections for these products are very strong well into 2024, especially given that AI demand is through the roof.  

AMD got some good news from Lamini, which announced the achievement of software parity with NVIDIA’s CUDA when running on AMD’s MI250 AI GPU accelerators. Since the newer parts perform at even higher levels, competition in this segment is clearly heating up.

As a result, AMD expects that the MI300 family will be the fastest product to ramp up to $1B in revenue in AMD’s history. 

PC Business

AMD hit the ball out of the park in this segment, with annual revenue growth (year-over-year) of a whopping 42% (46% sequentially) on $1.5B in related revenue. It also drove down inventory levels sharply as the Ryzen 7000 proved particularly strong. Client CPUs powered by the Zen 4 core increased by more than 100% sequentially, and AMD continues to dominate the high-end workstation segment with its new Threadripper Pro workstation CPUs (also based on the Zen 4 core). These Threadripper Pro workstations are amazingly powerful, and bringing them to market represented a substantial risk that obviously paid off. Again, this shows that it often takes big risks to move against an entrenched, powerful competitor, and Intel is both. Dell, HP and Lenovo (which pioneered these machines) have all announced new workstations based on the latest version of Threadripper.

Looking forward, AMD is in lockstep with Intel and Qualcomm on AI and hybrid AI growth in the enterprise and anticipates a huge amount of demand once buyers fully understand the productivity increases associated with using advanced generative computing solutions like Microsoft’s Copilot.  


Gaming

Gaming took a hit with an 8% decline, largely due to aging gaming consoles. This is unlikely to recover until the consoles are refreshed, but a full refresh isn’t due until 2025/26 (though Microsoft is expected to have an update in 2024, which could result in more demand next year for that platform).  

It’s just the nature of the console market, which is a razor and blade market. The consoles are sold at or near cost and the vendors make their money off content. There is no real incentive to fast refresh, so you get these typical hills and valleys which AMD has no control over.  
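The razor-and-blade dynamic can be sketched with a quick back-of-the-envelope calculation. Every number below is invented for illustration; none of them come from AMD or the console vendors:

```python
# Razor-and-blade economics sketch: the console is sold near cost, and the
# vendor's margin comes from content sold over the console's long life.
# All figures are hypothetical round numbers.

hardware_price = 500.0   # what the consumer pays for the console
hardware_cost = 490.0    # vendor build cost: sold nearly at cost
games_per_year = 6       # content purchases per console per year
margin_per_game = 12.0   # platform cut per game, in dollars
lifetime_years = 7       # typical gap between console generations

hardware_margin = hardware_price - hardware_cost
content_margin = games_per_year * margin_per_game * lifetime_years

print(f"Margin on the console itself: ${hardware_margin:.0f}")     # $10
print(f"Margin on content over its life: ${content_margin:.0f}")   # $504
```

With the content margin dwarfing the hardware margin, there is little incentive to rush out new hardware, which is exactly the hills-and-valleys pattern described above.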

Wrapping Up:

AMD had an impressively strong quarter due to high focus, impressive execution, and a willingness to take risks with products like Threadripper Pro. (I still wonder what would happen if AMD created a similar product in its graphics line. Since engineers and gamers chase performance, the upside could be equally fascinating if it were done the way Threadripper was.)  

In the end, AMD leaves the quarter stronger than when it began and well positioned for the coming wave of hybrid AI and the amazing things we’ll be doing with LLMs (Large Language Models) in our near-term AI future. 

Source: TG Daily – AMD’s Amazingly Strong Q3 Results

The Advantages of LLP Structure for Small Businesses: A Comprehensive Guide

The Rise of LLPs for Small Businesses

In the dynamic world of business, selecting the right structure is a pivotal decision for any entrepreneur. Limited Liability Partnerships (LLPs) have become increasingly popular, particularly among small businesses, due to their unique features and the numerous advantages they offer. In this comprehensive guide, we’ll delve into the world of LLPs, exploring their features, the LLP registration process, and the manifold benefits they bring to small businesses.

Understanding LLP: Defining a Limited Liability Partnership

Before we explore the myriad advantages of an LLP structure for small businesses, let’s clarify precisely what an LLP is. A Limited Liability Partnership is a distinct legal entity that uniquely combines features from both partnerships and private limited companies. This innovative structure marries the liability protection of a private company with the operational flexibility of a partnership.

The Features that Define a Limited Liability Partnership

One of the standout features of an LLP is its limited liability protection. This means that the personal assets of partners are safeguarded, and their liability is restricted to their agreed-upon contribution to the business. This protection is particularly attractive to entrepreneurs and small business owners who seek to separate their personal finances from business obligations.

Delving Deeper: The LLP Registration Process

The process of LLP registration is relatively straightforward, making it a practical choice for small businesses. Here’s an in-depth look at the steps involved:

  • Choosing Partners: To embark on the LLP registration journey, you must select at least two designated partners with valid Director Identification Numbers (DIN) and Digital Signatures (DSC).
  • Name Reservation: The first significant step is choosing a unique name for your LLP that complies with the guidelines of the Ministry of Corporate Affairs (MCA). Once approved, the name is reserved for a period of 90 days.
  • Document Submission: The next step is to prepare and file the necessary documents with the Registrar of Companies (RoC). These documents include the LLP Agreement, DIN and DSC of partners, and the consent to act as partners.
  • Financial Considerations: The process involves the payment of applicable fees for LLP registration, which varies depending on the capital contribution.
  • Certificate of Incorporation: Upon the successful scrutiny and completion of these steps, the RoC will issue a Certificate of Incorporation, signifying the finalization of the LLP registration process.

Unveiling the Advantages of LLP for Small Businesses

Now that we have covered the fundamentals, let’s explore the various advantages of LLPs, especially as they relate to small businesses:

  • Limited Liability Protection: As previously mentioned, one of the primary draws of an LLP is the limited liability it affords partners. This unique feature ensures that the personal assets of partners are safeguarded, reducing risk and promoting a more secure business environment.
  • Streamlined Formation: The streamlined registration process is a significant advantage for small businesses and startups. It translates to reduced administrative burdens and lower associated costs, making it an accessible choice for budding entrepreneurs.
  • Tax Efficiency: LLPs are taxed as partnerships, with profits being taxed at the partner level. This results in tax efficiency, often reducing the overall tax burden on the business and its partners. The pass-through tax treatment means that the business itself is not taxed on its profits, a considerable advantage compared to corporate taxation.
  • Operational Flexibility: LLPs offer flexibility in management and operation, allowing partners to have more control over decision-making processes. This operational freedom can be particularly advantageous for businesses where adaptability and responsiveness to market changes are critical.
  • Business Credibility: LLPs are considered more credible and trustworthy in business transactions. This credibility can be a significant advantage when attracting clients and partners who may be more inclined to engage with a business structured as an LLP due to its limited liability protection and transparency.
  • Reduced Compliance Burden: LLPs have fewer regulatory requirements compared to private limited companies. This means less time and resources spent on meeting compliance obligations, allowing businesses to focus more on their core operations and growth strategies.

Unpacking the Taxation Aspect: Tax Rate for LLP

The taxation structure for LLPs is a significant advantage. Profits are taxed at the partner level, and this is often more tax-efficient than corporate taxation. LLPs benefit from the pass-through tax treatment, which means the business itself is not taxed on its profits. Partners are taxed based on their share of the profits, making it a tax-efficient structure for businesses, including small and medium-sized enterprises.
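A quick illustrative calculation shows why single-level taxation matters. The 25% rates below are hypothetical round numbers chosen for clarity, not actual statutory rates in any jurisdiction:

```python
# Compare what owners keep under two-level (corporate-style) taxation
# versus single-level (LLP pass-through) taxation. All rates hypothetical.

def corporate_route(profit: float, corp_rate: float, dividend_rate: float) -> float:
    """Profit taxed at the company level, then again when distributed."""
    after_corp = profit * (1 - corp_rate)
    return after_corp * (1 - dividend_rate)

def pass_through_route(profit: float, personal_rate: float) -> float:
    """LLP-style pass-through: profit taxed once, at the partner level."""
    return profit * (1 - personal_rate)

profit = 100_000.0
print(corporate_route(profit, 0.25, 0.25))  # 56250.0 kept after two layers of tax
print(pass_through_route(profit, 0.25))     # 75000.0 kept with single-level tax
```

Even with identical rates at every step, taxing the same profit twice leaves the owners with noticeably less, which is the core of the pass-through advantage.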


Choosing the right business structure is a pivotal decision that significantly impacts the growth and success of any enterprise. For small businesses, Limited Liability Partnerships offer an appealing blend of features, combining limited liability protection with tax efficiency, operational flexibility, credibility, and reduced compliance burden. The streamlined LLP registration process makes it an accessible choice for small business owners looking to separate their personal assets from their business operations.

Understanding the advantages of LLPs and how they apply to your business is essential. Whether you’re a startup or an established small business, the LLP structure could be the key to unlocking your business’s full potential. It’s a structure that combines the best of both worlds, offering limited liability protection and operational flexibility.

If you’re considering an LLP for your small business, consult professionals who specialize in the field, such as Filing Buddy, to ensure a smooth registration process and a clear understanding of the structure’s benefits. Expert guidance can be a game-changer for your small business, enhancing its credibility, protecting your assets, and optimizing your tax structure.

Should you have any further questions or need assistance with LLP registration, don’t hesitate to reach out for further information or expert guidance. Your success is our priority, and we are here to support your journey as a small business owner.

Written by Lara Harper

Source: TG Daily – The Advantages of LLP Structure for Small Businesses: A Comprehensive Guide

Qualcomm Snapdragon X Elite:  Your Personal AI and the Laptop Revolution

This week I’m at Qualcomm’s Summit 2023, where I get to work while my wife gets to enjoy Hawaii. But given that I love tech more than I love swimming or sunbathing, I’m okay with this. This year, Qualcomm’s event is all about AI. Whether it’s new smartphones like the amazing Xiaomi 14 launching this week or the coming batch of category-blasting PCs based on the Snapdragon X Elite (the PCs using this technology will show up in 2024), the potential for AI disruption has never been greater, and Qualcomm has jumped into this technology with both feet.

Let’s talk about what is coming with Snapdragon X Elite laptops this week.

A Revolution in Laptops

Qualcomm is the major player in smartphones and has promised a massive improvement with Snapdragon 8, but the new Snapdragon X Elite solution has made the biggest jump. My favorite laptop is the last-generation HP Folio which, while a bit slow in use, sported almost unbelievable battery life and a design that had me leaving my backpack at home or at the hotel most of the time. Having all the battery life you need made me want to scream “freedom” every time I used the device, so I’ve been looking forward to the next-generation part that would address the performance issue and at least make it competitive with products using chips from other providers.

As most know, the ARM-based Apple M2 chip has been surprising the market with its market-leading efficiency and adequate performance, leading many to conclude that Apple laptops using this unique part are the best in the market. Well, Qualcomm reports that this latest chip outperforms Intel’s monster Core i9 chip while using 30% less energy than Apple’s M2. That is so far beyond my expectations that I’m truly looking forward to trying this myself.  

Be aware that there are huge improvements coming from AMD and Intel as well, but these changes likely wouldn’t have happened if it weren’t for Qualcomm driving this change. It is very unusual for a vendor that isn’t in the PC segment to wake up the vendors who are, but this entire segment is about to get one hell of an adjustment thanks largely to Qualcomm and Microsoft which collectively are driving this move to create solutions that are better on paper than what Apple has. I expect Apple won’t be standing still either. 

Wi-Fi 7 and 5G are part of this platform, and the laptops will have things like screen sensing to keep your stuff private. Think near-instant photo editing, AI on the desktop with a massive AI performance boost, and the ability to not only use AI to alter pictures but being able to use AI to identify when AI was used to alter pictures (so you can better determine whether someone is trying to trick you).

Large Language Models (LLMs) like ChatGPT and Microsoft’s amazing Copilot running natively on your laptop, helping you edit, create, and alter content at unheard-of speeds, and letting you have a conversation with your PC are all coming with these new PCs, which promise to dramatically change both how we use these machines and what we use them for. They’ll be able to run LLMs with 13B+ parameters and still respond near instantly, which is another amazing advance for this new platform.
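A rough memory calculation shows why running a 13B-parameter model on a laptop is now plausible. The quantization levels below reflect common industry practice rather than Qualcomm’s published specifications:

```python
# Approximate weight-storage footprint of an LLM at different precisions.
# Parameter count and bit widths are illustrative assumptions.

def model_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Gigabytes needed just to hold the model weights."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

# A 13B-parameter model at full 16-bit precision:
print(model_memory_gb(13, 16))  # 26.0 GB -- too big for most laptop memory

# The same model quantized to 4 bits per weight:
print(model_memory_gb(13, 4))   # 6.5 GB -- feasible in laptop RAM
```

Quantizing weights from 16 bits down to 4 cuts the footprint by 4x, which is what brings models of this size within reach of a thin-and-light laptop.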

Higher integration with tablets and smartphones allows the seamless use (the feature is called Snapdragon Seamless) of all three products together as combined screen real estate.  

Wrapping Up: Holy Crap

That is really the only comment that came to mind as I was being briefed on this platform. Talk about a revolution! Laptops with better battery life than most smartphones that significantly outperform most non-gaming laptops for typical tasks, and that outperform even gaming laptops on AI, are pretty much unbelievable. But Qualcomm is known for doing the unbelievable, so this shouldn’t be a surprise (though I was, frankly, very surprised).  

This year in Hawaii at the Qualcomm Summit I saw the future, and I can hardly wait until next year to try out for myself the coming Snapdragon X Elite laptops as they drive a revolution into the laptop market worldwide.   

Source: TG Daily – Qualcomm Snapdragon X Elite:  Your Personal AI and the Laptop Revolution

Lenovo vs. Dell: Size vs. Focus and Synergy

I’m at a fascinating event this week at Lenovo. For the first time, Lenovo brought all the analysts that cover the company together as a group not to talk about each individual Lenovo product or service, but to chat about how Lenovo is executing its One Lenovo plan and creating an Apple-like experience that drives a “better together” concept. It’s the basis of why the company has become such a huge global success.  

This is in contrast to Dell which has also been built from similar components but still appears largely siloed and defined more by internal conflicts than being a showcase of how to bring the full power of the company to bear on what remains a troubled market that’s struggling with how to deal with generative AI, foundation models, the metaverse, and whether employees come to work or work from home.  

The contrast between the companies reminds me of my time at IBM, when I could see what IBM was and how it originally became dominant, but experienced what IBM had become while I was there in the ’80s and ’90s: more a dysfunctional family that had lost market focus and customers and was struggling to keep the lights on.  

In short, I see Lenovo transitioning to what IBM was at its peak, and Dell transitioning to what nearly put IBM out of business. And Dell is hardly alone in this trend. I’m somewhat of an expert on IBM’s decline because I got so frustrated with what was happening back then that I researched and wrote a report on the causes of IBM’s decline, a report that both put me up for an award and nearly got me fired. 

The Problem with Umbrella Companies

IBM was, and both Lenovo and Dell are, umbrella companies. An umbrella company is a firm, often but not always formed through acquisitions, that places each somewhat independent unit under a single brand. From the outside, it looks like one company, but inside, problems emerge when each unit competes with its peer units for resources while the executives compete for clout, status and the chance to lead the combined company.  

Customer needs tend to be subordinate to the conflict. You can see this play out in the market as Lenovo gradually has advanced on Dell and now tops that company in a growing number of areas, largely due to bringing the full power of the company against a growing list of customers that increasingly want suppliers to reduce, not increase, complexity in an increasingly complex world. 

In short, the reasons that IBM nearly failed and Lenovo is out-executing Dell are tied to customer focus and using the power of the parent company so that the umbrella company becomes a competitive advantage instead of a disadvantage against smaller, more focused, efforts. 

It Starts with the CEO

In looking back at IBM, that company started to break when Thomas Watson, Jr. retired. Until then, succession appeared to stay in the family, so you had a leader whom no one was likely to challenge, one who could say “jump” and everyone in the company would ask how high. This is what Dell used to have when Michael Dell was more active and what Lenovo still seems to have. At Lenovo, the leadership is crystal clear. At Dell, not so much.  

Succession is always an issue, but it only becomes a big issue should the CEO leave or when leadership is uncertain. What I’m seeing at Lenovo is that leadership is as strong as it was at IBM under Thomas Watson, Jr. (or Sr.). At Dell, leadership started to degrade when Michael Dell first tried to retire in the early 90s. Now it seems divided across a number of people while Michael Dell seems to be taking a far less active role.

This lack of focused and singular leadership is problematic in an umbrella company because each division leader then attempts to fill the leadership vacuum, and the resulting lack of a global authority and vision means they are often working at cross purposes rather than acting like they are on the same team.  

Wrapping Up: Lenovo’s Competitive Advantage Is CEO Yang Yuanqing

While this starts with a clear and growing problem at Dell and its uncertain leadership, Lenovo may experience this problem in the future should its current CEO either repeat Dell’s mistake or depart without assuring the kind of leadership continuity he now provides. The Watsons clearly didn’t anticipate well enough what would happen when the family no longer led the company, and companies that remain in business over 100 years are the exception, not the rule. It is not uncommon for a firm to lose its way when a strong leader is replaced by someone who lacks experience, leadership skills, implicit/explicit authority, or market understanding.

This may be the strongest argument for why AI needs to focus on assuring the CEO job because that isn’t only the most highly compensated position, but it is the one position that, when it degrades, can put the company into decline. CEOs like Michael Dell, Yang Yuanqing, and Thomas Watson, Jr. are rare, yet we do little to assure the aspects that made them successful are sustained after they leave. Perhaps the most powerful use of AI will be to assure that a CEO who is exceptional is also virtually immortal, then use this to create that same dynamic so that every exceptional employee is also virtually immortal, thus ensuring the immortality and success of the company. 

Source: TG Daily – Lenovo vs. Dell: Size vs. Focus and Synergy

AMD Moves to Acquire to Build Out AMD’s AI Solution

Every chip vendor is in a race to provide the best hardware to compete in the massively accelerating AI market. While hype may be a bit ahead of reality now regarding AI capabilities, the speed of AI advancement is closing that gap surprisingly rapidly. Chip vendors like AMD, Intel, Qualcomm and the current AI market leader, NVIDIA, are moving very rapidly to understand and embrace this opportunity.

What is quickly becoming evident is that just focusing on the hardware isn’t good enough. These vendors must also develop AI software competence which was demonstrated early on with NVIDIA’s massive two-decades-long effort to bring AI to the market at scale. Intel is significantly increasing its own software capabilities, and AMD, not wanting to be left behind, has been ramping up, as well.  

AMD’s recent acquisition is a case in point and should significantly and very rapidly advance AMD’s AI capabilities.

Let me explain. 

NPUs and GPUs + Software

AMD is one of several vendors that will have a complete hardware set for AI shortly, including low-performance, highly efficient NPUs (Neural Processing Units) and high-performance, less efficient GPU solutions. Used in concert, they should provide the breadth of hardware needed to both run high-performance AI workloads as well as persistent low-performance edge AI workloads where energy efficiency is more important. 
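To illustrate the split, here is a toy dispatcher that routes work the way the paragraph describes: persistent, low-intensity workloads go to the efficient NPU, heavy bursts to the GPU. The threshold, device names, and routing rule are invented for illustration and are not AMD’s actual scheduling logic:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    ops_per_second: float  # sustained compute demand
    always_on: bool        # runs continuously in the background?

def pick_device(w: Workload, npu_budget_ops: float = 1e12) -> str:
    """Route always-on or light workloads to the NPU, heavy ones to the GPU."""
    if w.always_on or w.ops_per_second <= npu_budget_ops:
        return "NPU"
    return "GPU"

print(pick_device(Workload("background noise suppression", 5e10, True)))  # NPU
print(pick_device(Workload("LLM batch inference", 5e13, False)))          # GPU
```

The design point is energy: the always-on workload stays on the low-power NPU so the battery survives, while the bursty, compute-hungry job gets the GPU’s raw throughput.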

To do this effectively, AMD will need to both understand the AI engines intimately and better understand the AI software that will run on them; otherwise, its solutions are likely to underperform those of hardware vendors who have developed a stronger AI software proficiency and are better able to tune their products to meet related demand.  

If AMD doesn’t deeply understand AI solutions and can’t provide the necessary performance advantages, the AI market will pass it by. But AMD isn’t a company that intends to let that happen, so it moved to acquire an AI software-enabling company.

This fits with the IBM training of AMD’s award-winning CEO, Lisa Su. One of IBM’s historic advantages is co-developing hardware and software together; it’s still arguably the market leader in mature, tested AI technology with its watsonx solution. The acquired company creates enabling software for future AI deployments. In other words, it makes AI run better but doesn’t generally compete with AI vendors like OpenAI, allowing it to move between AI providers to create solutions that significantly enhance the performance of third-party AIs in a way that shouldn’t upset those providers.  

Tying this company to AMD makes a lot of competitive sense. If an AI can be made to run better on a vendor’s hardware, those that buy that AI are likely to favor the hardware it runs best on, and the acquisition should assure that AI software runs best on AMD.

This should provide a strong competitive advantage to AMD once the company is acquired and merged, because not only will AMD have unparalleled access to the acquired company’s tools, but the two companies will then work harder together to create a unique and powerful solution.

In addition, the company has been favoring AMD for some time in its solutions, suggesting that much of the greater integration expected from the merger already exists, providing a far faster path to market with a blended AMD solution.  

In short, the two companies already seem to be working together well, suggesting the benefits to AMD for this merger should emerge unusually rapidly. 

Wrapping Up:

Vendors like AMD are rapidly adjusting to the AI opportunity, but those with enhancing software will likely have the strongest market advantage. Anticipating this, AMD has moved to acquire an AI-enabling software company that has the potential to give AMD a significant advantage in creating and providing advanced AI solutions. 

This acquisition again shows that AMD is here to play and plans to aggressively go after and seize this new AI opportunity and make it theirs. 

Source: TG Daily – AMD Moves to Acquire to Build Out AMD’s AI Solution

Poly Brings Hardware into the Future at Zoomtopia

When setting up for hybrid work, you first select a platform for collaboration and conferencing and then choose the hardware that will be used both in the conference room and by the employees who are attending remotely. At Zoomtopia this week, Poly showcased an impressive lineup of hardware for both those in the conference room and for those attending remotely.  

Let’s talk about Poly and Zoomtopia this week.


Zoom and Microsoft Teams are the most common platforms I see in use in the market. I’ve long thought that with Cisco’s telephony background, all it needs to do to dominate this space is integrate WebEx with its telephone solutions. This would make moving from an audio to a video call seamless. But the company doesn’t interoperate well even with its own solutions, which has largely eliminated this advantage and significantly reduced Cisco’s total available market. 

Zoom is popular because it is relatively inexpensive, very easy to operate and interoperates with a lot of different hardware. It reminds me a bit of how Microsoft became so powerful by leveraging partners and interoperability. It is somewhat ironic that Zoom appears to be doing an even better job at this than Microsoft, which allows Zoom to compete with these two much larger and more powerful companies effectively.  

AI will certainly shake up this segment. Since Microsoft is the most aggressive with AI technology now, it could change this dynamic, but for now, Zoom stands out, much like Microsoft once did, by being willing to work with anyone.  


A case in point is Poly which announced several compelling new products at Zoomtopia. Leading the charge was the Poly Studio Bundle with smart E70 cameras, the TC10 controller, and an integrated HP Mini Conference Room PC. This relatively easy-to-use solution sets up larger conference rooms with full camera coverage and automated camera switching.  This is an effective solution when coupled with Poly sound bars for small and medium rooms.  For large rooms, you might consider a far more powerful soundbar solution from Poly partner Nureva.  I had a chance to try out the Nureva large-room solution, and it is impressive.  

For those calling into a meeting from home, in transit, from a cubicle or in a conference room with a lot of ambient noise, the Poly Voyager Surround 85 Unified Communications Bluetooth headset stands out. With cradle charging and wireless support, this is an impressive noise-cancelling headset. With 21 hours of talk time, it'll even last an overseas flight without needing a charge. And it doesn't hurt that the headset is nice-looking.

In concert with HP, Poly announced two desktop webcams, the 430 and the 435. These are relatively inexpensive HD (not 4K) webcams that look good on top of your monitor and can travel with you on the road if you don't like your PC's built-in camera. Both have dual built-in microphones and come with long USB cords.

Finally, Poly and HP announced two keyboards and two mice. A lot of PCs purchased during the pandemic will now likely need a new keyboard and mouse due to the abuse those peripherals have taken over the last several years.

Overall, it was an impressive showing from HP, Poly and Nureva at the Zoomtopia event.

Wrapping up:

Zoom is fascinating because, as a far smaller company than either Microsoft or Cisco, it should have been competitively wiped out. Instead, it seems to be doing fine because of a tighter focus on its customers’ needs coupled with decent execution. Zoomtopia was a showcase of that execution. Partners like HP, Poly and Nureva highlight this advantage through strong interoperation and ongoing impressive improvements on usability.

We don’t really know where this remote work thing is going to end up, but as long as companies like Zoom, Poly, HP and Nureva keep stepping up with impressive hybrid work conferencing systems when it comes to meetings and collaboration, we know the end will be productive. 

Source: TG Daily – Poly Brings Hardware into the Future at Zoomtopia

The Microsoft Surface Studio 2:  Creating the Perfect Creator PC

Microsoft announced Surface Studio 2, and I think it represents the best effort so far to create the perfect creator’s laptop. It isn’t a cheap date at around $2K for a base configuration that likely won’t meet the needs of many creators, but optioned-up with a professional NVIDIA graphics card, it’s a viable workstation-class PC, even though it lacks the certifications typically required of workstations.  

But I’m talking hardware design here. The target competitor is the Apple MacBook Pro, which the Surface Studio 2 appears to surpass with advancements that Apple doesn’t seem to want to match. Apple is known for focusing more on preserving margins than on building the best tools for the creators who make up much of its existing installed base.

Don’t get me wrong. Many of my peers believe the MacBook is the best creator tool out there due to its performance and battery life, but I think there is room for improvement and Microsoft is exploring that with Surface Studio 2.  

What Does a Creator Need in a Notebook?

Most of the creators I’ve known live on either workstations or gaming rigs that can handle workstation-class loads but cost less than a true workstation. Over time, the load has shifted from the CPU to the GPU, which means the perfect creator product needs not only a decent GPU but also the option of a professional-grade GPU if the creator wants to run advanced applications for animation, photo editing or specialized design work.

Creators generally like some kind of pen interface. This means a touch screen, but touch screens are difficult to use with a pen unless they lie flat. In addition, creators often need to stand while sketching, which means the laptop should convert into a tablet form even though it will be used as a more typical laptop for the paperwork that comes with most jobs (email, status updates, directions, descriptions, etc.).

Most recently, creators need access to AIs, and AIs run best on hardware designed for them. The NPU (Neural Processing Unit) was brought forward for that purpose; it not only helps create AI-driven products and offerings but also gives the user access to local generative AI instances that can speed up their workflow and increase their overall productivity.

MacBook Pro vs. Surface Studio 2

Both products start at around $2K, and both can be configured with discrete GPUs, though the Surface Studio 2 can use one of three GPUs, including a professional-grade option. The Apple MacBook Pro comes in 14-inch and 16-inch versions, while the Surface Studio 2 only has a 14-inch-plus display. Both products have unique displays tied to their platform: PixelSense for the Studio 2 and the higher-resolution Retina display from Apple. Both weigh in around the 4- to 5-pound range, and both should have similar battery life. Microsoft’s listed battery life is higher, but Apple has traditionally been more accurate in its representations, so the two should end up very close in this regard.

But the Surface Studio 2 has an NPU for generative AI creation and for potentially improving the user’s productivity; Microsoft is rolling out its Copilot generative AI feature across Windows and Office 365 next quarter. The Surface Studio 2 also has a cantilevered touch screen that lets the user sketch out ideas and create finished artwork on the screen. The NPU, which Apple’s solution lacks, gives the Studio 2 a bit more protection against premature obsolescence than the MacBook Pro has.

Wrapping Up: The Surface Studio 2 Is Better for Creators

This verdict rests more on the case design and the inclusion of the NPU than on other factors, where the MacBook Pro equals or even exceeds what the Studio 2 offers. But creators often need to draw things, work flat on a table and present their work. The cantilevered touch screen is far better for this than Apple’s traditional laptop screen, and the NPU in the Surface Studio 2, once the AI technology is available, will let the Studio 2 cut down significantly on the amount of work a creator has to do.

I think the overall difference between the products is that Microsoft has targeted creators very tightly with its Studio 2, while the MacBook Pro is more like a high-performance generic laptop. It performs admirably as a laptop, but it doesn’t embrace what creators need as well as a targeted product does.

Sadly, Microsoft continues to under-market Surface, which means Apple’s MacBook Pro will likely outsell it. But if you are a creator, you might find the Studio 2 a better tool for what you do than more generic laptops, including the MacBook Pro.

Source: TG Daily – The Microsoft Surface Studio 2:  Creating the Perfect Creator PC

How to Fix an iPhone Stuck on the Apple Logo | 2023

Few things can be as frustrating as seeing your iPhone frozen on the iconic Apple logo screen. This dreaded sight often leaves users feeling helpless. They may question whether their device has come to the end of its usable life, but there’s no need to worry.

In this scenario, a few straightforward steps can address this common concern. This article will explore the various reasons why an iPhone is stuck on the Apple logo. It will lead you through the necessary steps to revive your device.

Part 1: What are the Primary Reasons for the iPhone Stuck on the Apple Logo

Before we dive into the solutions, it’s essential to understand why your iPhone might be stuck on the Apple logo in the first place. Several underlying factors, ranging from software glitches to iOS update problems, can lead to this frustrating issue. Here are some of the most common reasons an iPhone won’t turn on and stays stuck on the Apple logo:

  • One of the most common triggers for this problem is a failed or interrupted software update or restore process. 
  • Jailbreaking can also make your iPhone more susceptible to issues like being stuck on the Apple logo.
  • Sometimes, hardware issues like a malfunctioning battery, faulty display, or a damaged motherboard can lead to the Apple logo freezing.
  • A drained or damaged battery might prevent your iPhone from completing its boot process, leaving it stuck at the Apple logo.

Part 2: 5 Ways to Fix iPhone Frozen on the Apple Logo

Now that we’ve identified the common reasons behind your iPhone being stuck on the Apple logo, let’s delve into the solutions. Depending on the root cause, below are several methods you can try to get your iPhone back to normal:

Fix 1: Recharge Your iPhone Battery

If your iPhone is stuck on the Apple logo and you haven’t used it for a while, it’s possible that the battery has completely drained. In such cases, the iPhone may not have enough power to boot up properly. By recharging your iPhone battery, you may resolve the issue of it being stuck on the Apple logo.

Fix 2: Force Restart Your iOS Device

Performing a force restart can often help in resolving the issue. A force restart is a powerful method that can clear temporary glitches and jumpstart your device. Discover the steps to initiate a force restart on various iPhone models: 

  • For iPhone 8 and Later: Press and quickly release the “Volume Up” button, then press and quickly release the “Volume Down” button, and finally press and hold the “Side” button. Let go of the button once the Apple logo comes into view.
  • For iPhone 7 and 7 Plus: Press and hold both the “Volume Down” and “Sleep/Wake” buttons simultaneously until you see the Apple logo.
  • For iPhone 6S and Earlier: Press and hold the Home and Sleep/Wake buttons together. Keep holding both until the Apple logo appears on the display.

Fix 3: Update or Restore via iTunes

In case a force restart doesn’t work, you may need to update or restore your iPhone using iTunes. This method can cause data loss, so make sure you have backed up your iPhone recently. Connect your iPhone to a computer, open iTunes, and follow these steps to fix an iPhone frozen on the Apple logo:

Step 1: Select “Update” if iTunes prompts you to after detecting your device. This process will attempt to reinstall the iOS without erasing your data.

Step 2: If updating doesn’t work, choose “Restore.” This will erase all data on your iPhone, so make sure you have a backup.

Fix 4: Utilize Wondershare Dr.Fone – System Repair (iOS)

In case your iPhone remains stuck on the Apple logo despite trying the previous fixes, you can turn to third-party software like Wondershare Dr.Fone – System Repair (iOS). This tool is designed to help users recover iOS devices from various issues, including the iPhone being stuck on the Apple logo. The best part about the tool is that you don’t lose your data.

It fixes over 150 iOS system issues and more than 200 iTunes errors. This tool also offers upgrading/downgrading of iOS without data loss. Given below are the detailed steps to resolve the iPhone stuck on the Apple logo issue:

Step 1: Select the Required Options from the Toolbox

Initiate the repair process for your iOS device by launching Wondershare Dr.Fone on your computer. Head to the “Toolbox” on the main interface and tap “System Repair.” A new window will appear where you should select “iPhone” as the device you are trying to repair.

Step 2: Download the Suitable Firmware

Here, make sure that your device is connected with a data cable. After this, click the “iOS Repair” option in the new window, then hit the “Standard Repair” option in the window that appears. Put your device into Recovery Mode so the platform can detect the iOS firmware for the connected device. Once the firmware is detected, click the “Download” button next to it.

Step 3: Press the Repair Now Option 

The platform will verify the downloaded firmware before making it available for fixing iOS. Next, hit the “Repair Now” button to begin repairing your iOS device. Watch the progress bar on the next screen to confirm the completion of the process, then tap “Done” to finish and make your device available again.

Fix 5: Check for Hardware Issues

If none of the above solutions work, consider the possibility of a hardware problem. Inspect your iPhone for physical damage, and if you suspect a hardware issue, it’s best to contact Apple Support or visit an Apple Store for professional assistance.


Dealing with an iPhone stuck on the Apple logo can be an incredibly frustrating experience. Yet, with the right approach and tools, it’s a challenge that can be overcome. We’ve explored several methods in this article. These ranged from the simple force restart to the advanced capabilities of software like Wondershare Dr.Fone – System Repair.

While force restarting and iTunes-based solutions can often do the trick, more stubborn issues may need specialized tools like Dr.Fone. The ability to diagnose and resolve complex iOS problems, including the dreaded Apple logo freeze, is a valuable asset.

Source: TG Daily – How to Fix an iPhone Stuck on the Apple Logo | 2023

Panos Panay Leaves Microsoft, Putting Surface at Increased Risk

This week, Microsoft announced that Panos Panay, Microsoft’s Chief Product Officer, will be leaving the company. Rumors are circulating that Panay is going to Amazon to lead its hardware effort (which needs help). Panay was the primary mover behind Microsoft Surface, its biggest cheerleader, and the closest thing Microsoft had to a Steve Jobs. But Surface was backed by Steve Ballmer, Microsoft’s prior CEO, and Satya Nadella, the father of Azure, is far more wedded to that solution set than to either Surface or Windows, both of which get a fraction of the support they once had under Ballmer or Gates.

Let’s talk about Panos Panay’s departure this week and what it means for Surface and Windows.


The Surface line of products was created at a time when Microsoft was still focused on competing with Apple, and it was designed to provide an Apple-like experience on Windows. I believe that, had Microsoft made the same commitment to Surface it made to Xbox, it would have given Apple a run for its money. However, having been burned on smartphones once and with all eyes on the financial performance of Microsoft, even under Ballmer this effort was underfunded, and it has languished even more under Nadella, who never seemed to give the platform much support.

While the hardware was generally rated highly, the initial focus was to create a better PC alternative to the iPad, and Microsoft did that for the most part. But in the years following the launch, interest in tablets as PCs dropped off a cliff. Apple stopped marketing the iPad as a PC replacement because it wanted its customers to buy both a tablet and a PC, not merge the classes.

Over time, it has become clear that Surface just does not fill a need for Microsoft anymore. It lacks a hard connection to Azure; it upsets the other PC OEMs (who do not want to compete with Microsoft); and it has never become anything but a minor annoyance to Apple. 

Panay’s departure means that Surface loses its most powerful supporter and, given the platform has not been strategic for nearly a decade, I expect Surface will be discontinued, sold or spun out eventually.  

Yusuf Mehdi

The expectation is that Panay will be replaced by Yusuf Mehdi. Mehdi is old school Windows and may end up doing a better job with Windows than Panay did because of that history and focus. Windows has also been underfunded and undermarketed of late but not to the extreme that Surface has, and Windows should still be strategic as the primary client for Azure. However, for the last several years, Microsoft hasn’t really focused on Windows, so it remains at risk of becoming the next Internet Explorer due to that lack of focus and funding for the platform.  

I expect Mehdi will work to reverse that trend and improve customer satisfaction with Windows once he takes over. Since he comes out of marketing, he will be motivated to reverse the trend that reduced marketing support over time to better assure the success of that platform. I have known Mehdi to be talented and focused, so his move to Windows leadership should be positive for that platform. Just how positive he can be will depend on his success in restoring Windows support to a more strategic level than it now is.  

Panay at Amazon

Amazon’s hardware business has been a hot mess for some time. There has been little consistency in product design or execution; the products are otherwise well built but underperform in the market given their potential value. This should be an ideal role for Panos Panay, as he is a hardware executive at heart, and his influence should improve the consistency between hardware products as well as raise the level of innovation so the products are more interesting. He has an eye for design, which suggests they will be more attractive, and more financially successful, as well.

Where Panay was somewhat wasted at Microsoft, he is likely to find Amazon a far more supportive environment for his hardware creativity, so I can hardly wait to see what his new team comes up with there. He has studied Apple closely and should be able to deliver an Apple-like experience with Amazon hardware on a budget. This could make things interesting in the consumer electronics market.

Wrapping Up:

Since Microsoft is a software company with little interest in hardware, Surface was always kind of an odd duck there, and so was Panos Panay, the father of Surface. With Panay moving to Amazon’s hardware group, Microsoft can better focus on Windows under Yusuf Mehdi, who is more tightly tied historically to Windows than Panay was, and Panay is well positioned to help Amazon turn its hardware into a more powerful solution. This move makes both Microsoft and Amazon potentially stronger, but it reduces the support for Surface inside Microsoft, suggesting the line will likely be discontinued. Given that Surface is no longer strategic, that would remove a distraction that Microsoft does not need and the PC OEMs do not want.

Source: TG Daily – Panos Panay Leaves Microsoft, Putting Surface at Increased Risk

Uncovering Hidden Efficiency: How Application Rationalization Boosts Productivity

Digital-first business leaders are constantly looking for the next ‘big thing’ in tech to put them ahead of the curve. Back in the noughties, it was operation-improving software. In the 2010s, it moved to cloud-based Software-as-a-Service (SaaS). Cut to the 2020s, it’s all about optimizing your organization’s SaaS portfolio.

Popular SaaS products from Microsoft, Google, Salesforce and more can assist with every business function from marketing to HR. So, it’s no wonder that we’re adding them to our IT portfolios in droves. However, the proverb goes that you can indeed have too much of a good thing — and as companies have accumulated many subscriptions over time, they have unwittingly created inefficiencies that hinder their progress.

So, what can be done to streamline your SaaS stack and restore the benefits of a well-oiled portfolio? Enter application rationalization.

What is application rationalization?

Software application rationalization is the process of assessing your organization’s SaaS portfolio to identify which subscriptions should be retained and which should be consolidated or terminated. The rationale behind the process is to justify the continued use of each tool within the company, by determining which provide business value and which do not.

This typically involves assessing the utilization, performance, and costs associated with each tool in relation to the wider SaaS stack.

Once audited, the applications that are deemed less valuable to business growth can have their number of licenses rightsized or be canceled altogether. This way, you’ll be left with a lean, productive suite of applications that provide an improved return on investment.
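To make the audit concrete, here is a minimal sketch of the kind of scoring pass described above. The portfolio data, seat counts, and the 50% utilization threshold are all hypothetical illustrations, not a real SaaS-management API; a real audit would pull these figures from SSO logs or a license-management tool.

```python
# Toy rationalization audit: score each SaaS subscription by seat
# utilization, then flag under-used ones for rightsizing or cancellation.
# All apps and figures below are invented for illustration.

def rationalize(apps, min_utilization=0.5):
    """Split a portfolio into well-utilized apps and review candidates."""
    keep, review = [], []
    for app in apps:
        utilization = app["active_users"] / app["licensed_seats"]
        if utilization < min_utilization:
            review.append((app["name"], round(utilization, 2)))
        else:
            keep.append(app["name"])
    return keep, review

portfolio = [
    {"name": "ProjectTool A", "licensed_seats": 200, "active_users": 180},
    {"name": "ProjectTool B", "licensed_seats": 150, "active_users": 30},
    {"name": "CRM",           "licensed_seats": 50,  "active_users": 48},
]

keep, review = rationalize(portfolio)
print("Keep:", keep)      # well-utilized subscriptions
print("Review:", review)  # candidates for rightsizing or cancellation
```

In this sketch, ProjectTool B is used by only 20% of its licensed seats, so it surfaces as a candidate to rightsize or cancel, while the well-utilized tools stay untouched.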

How can it assist your business?

Many business leaders are aware that application rationalization can help them identify and cut unnecessary costs. However, the benefit that is often overlooked is that rationalization can also improve business productivity.

With a streamlined portfolio, companies can enjoy the following benefits for their operations:

1. Fewer workflow inefficiencies

One key benefit of application rationalization is the elimination of workflow inefficiencies that stem from redundant tools. As businesses scale, it’s natural to accumulate subscriptions that overlap in functionality or even duplicate subscriptions to the same product. These incur unnecessary extra costs in terms of both time and resources, with fragmentation in workflows, delays in passing through approvals, and file management confusion.

However, application rationalization can identify these redundancies and support organizations in making informed decisions. It also helps to identify any instances of shadow IT, the use of unauthorized systems for work purposes, which 80% of employees admit to.

As a result, the company can identify the most suitable, valuable tools to prioritize, and terminate subscriptions to others. For example, a business might be using three separate project management tools, each with its own feature set and user base across the organization. Once rationalized, the company can migrate all projects to one ecosystem, streamline project management processes, and reduce confusion.
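The redundancy check in the project-management example can also be sketched in code. This is a simplified illustration with made-up tool names and category tags (a real inventory would come from an IT asset or SSO audit): group each subscription by the business function it serves and flag any function covered by more than one tool as a consolidation candidate.

```python
from collections import defaultdict

# Flag overlapping subscriptions: any business function served by more
# than one tool is a consolidation candidate. Inventory is hypothetical.

def find_overlaps(inventory):
    by_function = defaultdict(list)
    for tool, function in inventory:
        by_function[function].append(tool)
    return {f: tools for f, tools in by_function.items() if len(tools) > 1}

inventory = [
    ("Asana", "project management"),
    ("Trello", "project management"),
    ("Jira", "project management"),
    ("Salesforce", "CRM"),
    ("Slack", "chat"),
]

overlaps = find_overlaps(inventory)
print(overlaps)  # the three project management tools are flagged together
```

Here the three overlapping project management tools surface as one group, mirroring the consolidation decision described above.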

2. Higher employee engagement

Though they’re built to make life easier, an over-provisioning of software tools can actually hinder your employees — and provide a source of frustration as they try to complete their work. According to project management specialists at Asana, context switching between different apps, tasks and projects is a distraction that stresses out workers and makes them less productive as a result.

But this isn’t news to your staff. One report from Personio asked employees how they feel about their software use in the workplace and found that 37% believe they have too many apps to manage, while 36% claim it disrupts their workflow. Instead of entering deep work, many employees spend valuable time out of their day grappling with unfamiliar tools and tedious software administration.

Application rationalization helps to optimize software use and boost employee morale. When organizations trim their portfolios to a leaner suite of tools, staff can enjoy a more fruitful experience at work. This is because they become proficient in a smaller set of tools, increasing both their productivity and job satisfaction.

3. Streamlined management and procurement processes

As we’ve discussed, application rationalization offers a number of immediate gains for business productivity. However, the process also lays the groundwork for future SaaS management. By maintaining a leaner software portfolio, organizations reduce the complexity of managing licenses, updates, and renewals — resulting in lower administrative overhead and more efficient resource allocation.

This also supports decision making when it comes time to purchase new tools. According to Statista, the average organization now deploys 130 different applications, which has increased consistently since 2015. At the current rate of growth, this is unsustainable for many businesses. However, application rationalization helps to identify gaps in your current software ecosystem so that you can make select and strategic new acquisitions rather than add to your stack indiscriminately.

This results in a cost-effective and agile procurement process that is less likely to succumb to the pitfalls of portfolio bloat.

So, with a leaner stack of existing tools, you save time and resources in daily operations, SaaS management, and new procurement, all the while improving employee engagement. Application rationalization is no quick process, but offers one of the greatest returns on investment in SaaS — productivity benefits that will last for years to come.

Written by Lara Harper

Source: TG Daily – Uncovering Hidden Efficiency: How Application Rationalization Boosts Productivity

The Death of the First Microsoft Surface Duo and Anticipating the Death of Surface 

In terms of valuation, Microsoft is the only company that comes close to Apple’s incredibly high valuation. The difference isn’t a breadth of products so much as the fact that Apple sells more products that people relate to than Microsoft does. In the same window in which Apple launched its new and impressive iPhone 15 and iPhone 15 Pro, Microsoft pulled support for its first iPhone competitor in nearly a decade, the original Surface Duo. The announcement created the impression that Microsoft was abandoning the platform, which will be problematic for the Duo’s future, if not the future of Surface, because it speaks to commitment, and neither Apple nor Microsoft tends to stick with products that don’t perform well in the market.

Ironically, I think the iPhone 15 Pro in particular, and its related launch effort, showcases what Microsoft needs to do to truly compete with Apple via Surface, but Microsoft’s unwillingness to step up suggests that both Surface in general and the Duo line in particular aren’t long for this world. My personal view is that companies should either do things well or not at all. Otherwise, they’re just wasting money.

Let’s talk about why I think Surface is not long for this world.  

Competing with Apple

I’ve done a lot of work over the years talking to various PC OEMs and Microsoft about how to compete with Apple and, for all of that, little real progress has been made. Apple wins because it provides a better overall experience and does a better job of assuring that experience after the sale. Apple isn’t perfect. It tends to focus excessively on revenue and profit, which leads it to make decisions that aren’t in the best interest of customers and, particularly, suppliers and application vendors, whom Apple tends to bleed dry financially. It is also less than transparent about problems and has been accused of intentionally damaging users’ phones. That can damage trust, but Apple is very good at crisis management, which limits the related image and brand damage significantly.

The elements of Apple’s success are a simple product line of relatively high quality, a dedicated store with in-person support, and a demand-generation budget that leads the industry. That formula works: it made Apple the most valuable company in the world. Microsoft isn’t a slacker; it’s the second most valuable company in the world, and for the last several years the two companies have traded places a lot. In 2000, Microsoft was number one and Apple didn’t even make the top ten, which suggests Apple is advancing far more quickly in terms of valuation than Microsoft, even though Microsoft arguably has a broader product set, is much stronger in the cloud and in the enterprise, and has a significant lead in generative AI, the tech that is driving the market.

If you wanted a product to challenge Apple, the new iPhone 15 Pro showcases what would be needed. What typically happens when a company tries to displace Apple is that it picks and chooses from the list of what Apple provides and, much like a triathlete who decides they don’t want to swim, the result doesn’t perform well. If you aren’t willing to complete the solution, which shortly will include both the Apple Watch Ultra and the Apple Vision Pro, then don’t bother. It won’t work.

Xbox vs. Zune vs. Surface

Xbox is an example of a product that was executed well. Because Microsoft was convinced that Sony was going to go after Windows, it fully funded the Xbox effort. It took massive losses in the early years, but the result was a product that could and does compete very well with Sony. Had Microsoft continued that level of support and promotion, I believe it would be the only console gaming platform in the market at scale. 

With Zune, Microsoft tried to compete with Apple on a budget. It created a compelling argument with early subscription music, the ability to play videos, and a much more robust design. But it didn’t counter Steve Jobs’ claim that music subscriptions were stupid, didn’t provide any video content, didn’t create an Apple music migration utility (to make moving from the iPod to the Zune easy), and didn’t market the robust case. After it finally improved the product, Microsoft defunded marketing, so Zune failed.

When it comes to Surface, Surface PCs have Apple-level quality and some unique capabilities, like combined PC and tablet functionality, that are unmatched by Apple. But they don’t get Apple-level marketing, so their unique advantages aren’t widely known. The Microsoft store isn’t a Surface store, showcasing a unique Microsoft problem: Surface arguably competes with Microsoft’s OEM partners, weakening partner support. This forced the stores to also carry products from PC OEMs, which diluted the Surface presence and made support problematic (OEMs like to support their own products). The stores should be more closely modeled after Apple stores and be Surface-only, but that would have upset the OEMs even more and created additional unintended negative consequences.

Microsoft’s two obvious paths to success are to eliminate Surface and focus on making the OEMs more successful, or to spin Surface out with funding so it can fully compete with Apple. Microsoft has yet to do either, but the cheapest path is to eliminate the platform and, given how it just pulled the plug on the original Duo, that is the path Microsoft seems to be heading down.

Apple has several weaknesses that could be exploited, but you must be willing to call those weaknesses out while matching Apple in the rest of its solutions.  No one seems willing to do that, so what is the point of competing with Apple if you aren’t willing to step up to the competition? 

Wrapping Up:  

Apple’s latest launch was one of the most impressive I’ve ever seen. With its Windows 95 launch and initial Xbox effort, Microsoft showcased that it can perform at Apple’s level but has subsequently chosen not to do so. Microsoft just doesn’t seem to really care anymore. Even Windows is at risk due to a very clear drop-off in support, which reminds me a lot of what happened to Internet Explorer over a decade ago.  

It annoys me to no end to watch companies intentionally underperform and refuse to learn from past mistakes or past successes. Microsoft is a strong company with impressive capabilities, but it just doesn’t seem to have the will to compete with devices anymore, so why keep them? If you don’t want to play the game well, why play it? My hope is that generative AI will eventually keep companies from repetitive mistakes. My fear is that even AI won’t be able to overcome executives who fail to learn from past mistakes.  

Source: TG Daily – The Death of the First Microsoft Surface Duo and Anticipating the Death of Surface 

Windows 11 Adoption Rates and the Lesson Microsoft Refuses to Learn

In terms of demand, it amazes me that Microsoft had the most successful launch of an operating system in history with Windows 95 and nearly the worst product roll-out of that same product, yet internal politics resulted in Microsoft believing the successful part of the launch was the problem to be fixed. It would be like watching a football team where the quarterback threw perfect passes that the receivers couldn’t catch, and then seeing the coach pull the quarterback because he was making the receivers look bad.  

This goes to the heart of the problem with Windows 11: there is virtually no demand generation going on for it, so it’s more surprising that Windows 11 has done so well than that it has done so poorly.  

Let’s explore why Windows 11 is lagging in adoption and the kind of problem that it will create in 2025 if Microsoft executes to plan and pulls support that year.  

Why getting the market to move is important

With every generation of Windows, Microsoft has significantly improved the security of the platform. Historically, older platforms become increasingly viable targets for hackers because the top-tier employees who supported them move on to working on the new one.  

In addition, supporting multiple versions of an OS is a burden for developers, who can’t aggressively use new features in the latest OS because they still must support the old one. Given that users whose hardware supports the new OS typically get a free upgrade at or around launch, you’d think adoption rates would be far faster. Instead, they seem to be slowing down, even though OS upgrades, compared to what they were in the Windows 95 days, are virtually painless.  

From Microsoft’s perspective, a fast upgrade cycle reduces its support costs, reduces its exposure to the bad press that results from the old OS being breached, and tends to result in higher customer satisfaction (except when it botches the release, as it did with Vista and Windows 8). Windows 11 is a nice improvement over Windows 10, which was a massive improvement over Windows 8, so why the low adoption rates?  

Microsoft’s demand problem

Despite the fact that Microsoft has employed some of the best marketing people I’ve ever met (I’m an ex-marketing director myself), it is like too many tech companies: it doesn’t institutionally get marketing. Too often, the marketing experts aren’t given adequate resources and aren’t making key marketing decisions, and the company seems to think that successful marketing is a bad thing, a belief that traces back to that Windows 95 experience.  

And Windows 95 was an amazing launch. There were lines around the block to buy the OS. The market was so hot that Microsoft had trouble keeping up with demand, but it failed to adequately assure the product’s quality or to staff support for what became a nightmare support load.  

Many of the problems that plagued Windows 95 have since gone away. For instance, back then drivers were a mess, and Microsoft’s release process allowed last-minute changes after a beta had been approved, both of which caused lots of breakage. There was also a lot of older or low-volume hardware in the market that wouldn’t run Windows 95, which was only discovered when the product was installed. It was a perfect storm of massive demand, a poor-quality experience, and massively understaffed support.  

Over the years, Microsoft has largely fixed the driver and hardware-diversity problems. It now forces you to verify that the new OS will work on your hardware before you install it, and, with AI, it can scale support like never before. But because of that early failure, Microsoft still thinks demand is a bad thing and laments the lack of it while doing little to fix the problem.  

Fortunately, Microsoft doesn’t have a competitor problem. Unfortunately, older versions of Windows fill that niche and, unlike real competition, push Microsoft to focus on reducing costs rather than on customers, which doesn’t bode well for the platform’s future. That is more likely to eventually produce an emerging competitor, most likely a smartphone variant that displaces PCs and makes both Windows and personal computers obsolete.  

Wrapping up

As demand for Windows 11 declines and Microsoft increasingly becomes the Azure company, its ability to support OEMs, or even its own Surface PC platform, declines as well. In Surface’s case, this may be a good thing because, given Microsoft’s licensing model, Surface has likely done more damage to the overall PC market by reducing Microsoft’s support for OEM PCs over time, all while never fully delivering the Apple model (dedicated stores, Genius Bars) it attempted, but failed, to emulate. You could argue that Surface has been dead for some time, but no one told Microsoft.  

Windows 12 is expected next year and, much as it was with Windows 10, jumping a release from Windows 10 to 12 will likely require new hardware or be painful. Microsoft should have begun building interest in Windows 12 by now, but it’s still struggling to build demand for Windows 11.  

Microsoft’s best defense against an eventual viable smartphone-based PC alternative is to have everyone on its current platform happy. On the current path, 2025 will likely provide the best opportunity for anyone wanting to displace PCs and Windows as a desktop OS. The current PC sales numbers are ugly, and OEMs are looking for a way out. Unless Microsoft steps up, that way out may be with a different vendor entirely. If the right vendor steps up, it could take out Windows and Azure with one better-executed hybrid solution.  

This won’t happen overnight, but remember how quickly the iPhone took out even the most powerful vendors that came before it? Things can happen very quickly in this market, and while Microsoft treats Windows as a legacy product by under-resourcing it, it may soon discover that Windows is critical to Azure’s success. Losing one will likely do massive damage to the other.  

Microsoft needs to fix its demand problem before that problem fixes Microsoft.  

Source: TG Daily – Windows 11 Adoption Rates and the Lesson Microsoft Refuses to Learn