IBM Levels Skills/Sales Support for Ecosystem Partners

ibm_logo

The relationships between IT vendors and their strategic partners are not always transparent. Certainly, there are obvious differences between the focus areas and skill sets of partners, such as value-added resellers (VARs), ISVs, system integrators, and specialists/consultants. But, traditionally, the relationships between vendors and partners also follow a hierarchy of sorts with those partners providing vendors with the most direct monetary value (like resellers) receiving greater attention and resources than others. 

Shifting that dynamic qualifies as news, so the recent blog from Kate Woolley, GM of IBM Ecosystem, explaining how the company is revamping its training, certification, and sales support offerings for partners is worth considering.

IBM’s partner investments

So what exactly is IBM Ecosystem doing? Woolley noted that as part of an ongoing effort to simplify partners’ experience and provide improved and expanded tools and resources, IBM is:

  • Extending to all registered PartnerWorld members, at no charge, access to the same trainings, badges, and enablement materials that IBM’s sellers enjoy. Partners will have access to skilling resources that align with product offerings to deliver hybrid cloud and AI solutions to clients, within data and AI, automation, security, sustainability, infrastructure, and other areas.
  • Helping partners qualify for sales and technical badges that demonstrate industry-recognized expertise, including the ability to position and differentiate IBM solutions. The badges are shareable on professional social platforms, such as LinkedIn, and travel with the learner. 
  • Enabling partners to access additional prospecting materials, such as sales videos, seller presentations, client presentations, and content, including white papers, analyst reports, decks, and solutions briefs.
  • Launching a new learning hub designed to dramatically improve the digital experience for partners. The hub offers a modernized, consistent experience, making it easier to find resources, locate the right training at the right time, and complete desired coursework. The new learning hub can be found at: https://www.ibm.com/training/bplearn/catalog

Woolley also highlighted that over the past year, IBM has doubled brand-specialized partner sellers in the IBM Ecosystem, increased technical partner sellers by more than 35%, and bolstered the digital experience with its Partner Portal. As a result, deal registration has improved, and the company has introduced partners to more than 7,000 potential deals valued at over $500 million globally. Woolley also committed to IBM’s plans to continue investing in partners’ experience with the goal of doubling revenue through the IBM Ecosystem in the next three to five years. 

Final analysis

What are the takeaways here? A couple of things are worth noting. First, IBM’s decision to allow all registered partners to freely access training and materials that emphasize the quality and value of the company’s solutions qualifies as a great example of a vendor making itself easier for partners to support. That might sound like a no-brainer, but it is particularly important for a company actively working on expanding partner-driven sales. It also sets IBM apart from IT vendors that treat strategic partners hierarchically. 

It is also worth noting how closely aligned the materials and skilling assets are with key IBM strategic imperatives and targets, including hybrid cloud and AI solutions, automation, security, sustainability, and infrastructure. Note that “infrastructure” (aka hardware) is at the end of this list—suggesting the emphasis the company is placing on evolving partners’ skills beyond simply driving sales of servers and storage arrays. 

For over a decade, IBM has been transforming itself from a traditional enterprise systems player into a vendor that delivers reliably powerful, innovative, and effective solutions to a broad range of global businesses and industries. To achieve that goal, the company needs to help ensure the participation and success of companies with whom it collaborates. 

Kate Woolley’s blog highlights the value of providing all strategic partners access to enhanced training and sales collateral. She also underscores IBM’s commitment to its own transformation and to bringing strategic partners along for what should be a memorable and profitable ride.  

Written by Charles King, Pund-IT®



Source: TG Daily – IBM Levels Skills/Sales Support for Ecosystem Partners

Money Saving Expert Bridging Loans: What’s the Verdict?

ukpf-logo

For some time, Money Saving Expert has been the UK’s go-to for honest and impartial advice on most essential financial topics. Along with the useful resources published by the MSE team itself, the website also has a forum with more than two million active members.

But what is the official (or unofficial) Money Saving Expert take on bridging loans? Does the Money Saving Expert website offer any advice, direct or otherwise, for those looking into bridging finance?

What is Money Saving Expert?

Established in 2003, Money Saving Expert was created by Martin Lewis – a financial journalist who would eventually make his own fortune helping other people save money. He’s now worth tens of millions of pounds, but can still be found regularly advising folks on how to save money on their household bills, and other outgoings in general.

Money Saving Expert is staffed by more than 100 people, who create and curate a steady stream of consumer-finance related posts. Martin Lewis himself has become known as something of a guru on the financial advice scene and is constantly popping up in high-profile slots on popular TV and radio shows.

He sold Money Saving Expert to the Money Supermarket Group several years ago for a massive sum of money but still contributes to its output. 

What Does Money Saving Expert Say About Bridging Loans?

Interestingly, bridging finance is not a topic the Money Saving Expert team has ever addressed in any official capacity. In fact, over the years there has barely been any mention of bridging finance from the team at all.

The same can be said for Martin Lewis, who has so far remained fairly tight-lipped about bridging finance, perhaps because bridging loans have only recently become mainstream, having previously been available almost exclusively to established business borrowers.

How About the Money Saving Expert Forum?

It is a slightly different story on the Money Saving Expert forum, where thousands of conversations about bridging finance have taken place to date.

All of this makes it surprising that little to no information or guidance on the subject has been published by Money Saving Expert itself.

Taking a look at the Money Saving Expert forum sheds light on some of the most common questions and concerns regarding bridging finance. For example, some of the most frequently asked questions of all by members of the Money Saving Expert forum include the following (and variations thereof):

  • Will I qualify for bridging finance?
  • Who can apply for a bridging loan?
  • What does the bridging loan application process look like?
  • What security needs to be provided for a bridging loan?
  • How are bridging loans repaid?
  • Can a bridging loan be repaid on a monthly basis?
  • Is it possible to get bridging loans with bad credit?

This would seem to suggest that general interest in bridging loans among the consumer audience has been piqued for some time. But given the lack of formal input from bridging loan experts, advice sourced from forums like these is not always reliable.

Always Ask the Experts…

Of course, this is not to say that forums like these cannot be a source of invaluable information. Particularly when it comes to people’s first-hand experiences with bridging finance, there is much to learn in public forums.

But if you have important questions or concerns regarding bridging finance, they should always be discussed with an expert. Book your obligation-free consultation with the team at Bridgingloans.co.uk today, and we will help you build a clear picture of the pros and cons of short-term borrowing.



Source: TG Daily – Money Saving Expert Bridging Loans: What’s the Verdict?

ThinkPad’s 30-Year Anniversary:  Innovation Squared

Laptop Lenovo Thinkpad Keyboard Technology

ThinkPad is almost the polar opposite of Apple: ThinkPad is function over form, while Apple tends to be form over function. You buy Apple for the look of the device and to feel part of the Apple cult, but you buy ThinkPad when your job depends on what you do on a laptop. While Apple is long on pretty and lightweight, it is also famous for throttling its products and for the various “gate” scandals surrounding coverups; ThinkPad has historically been the brand you can depend on to get the job done. And while Apple is clearly a U.S. company, Lenovo is an international firm with leadership spread all over the world. Both products are iconic, but whereas Apple was once known for innovative ideas, that reputation has eroded over time, while ThinkPad continues to define new technologies, most recently foldable screens and head-mounted displays.

Let’s look back at some of the highlights of the last 30 years.

The butterfly keyboard

One of the first notebooks I ever lusted after, though I never got one, was the little ThinkPad 701C with the big keyboard that carried the butterfly name. Granted, the power you could get in a notebook back in the early 90s was not much compared to today. Hilariously, we argued at the time that those processors were overkill.

Still, the idea of a tiny laptop that had a full-sized keyboard was compelling, and this was the product that filled a halo role and got people excited about the brand, even though most of us ended up with a more generic black ThinkPad laptop.

The T-Series

If ever there was a gold standard for a business laptop it was, and to a certain extent still is, the ThinkPad T-Series, particularly the T400s. They were affordable, not too heavy, and had relatively long battery life, a very strong industrial design, and one feature most users seemed unaware of. In fact, it was the basis of a running joke for much of the life of the T-Series: the laptop had a unique keyboard light that users would forget about when asking for new features (most asked for a lighted keyboard, something it already had).

If the laptop with the butterfly keyboard got people’s attention, the T-Series was what they bought, so virtually every other laptop maker, with the exception of Apple, copied the T-Series to some degree. It is interesting to note that when the MacBook Air, which was thinner than the ThinkPad, came out, Apple ran an ad showcasing that fact, though thinnest isn’t necessarily best.

First dual-screen laptop

Another innovative laptop came out in 2008 with dual screens: the ThinkPad W700ds. This laptop had a second screen that pulled out to provide more screen real estate. It was a small screen, though bigger than most smartphones of the time, allowing you to do things like keep an eye on your social media feed while working on a project. Screen size has always been a limiting factor for notebooks, and ThinkPad, well beyond any other vendor, has focused on creative ways to increase effective screen size for years.

The true tablet convertible

With all the hype surrounding the iPad taking over the world, the ThinkPad group came up with an alternative concept in 2013 called the ThinkPad Helix. This product had a removable tablet that could also be reversed for presentations, something some sales organizations wanted for individual pitches. It was very sturdy and robust, and even though the tablet threat was overstated (largely because Apple stopped driving it after Steve Jobs passed), it was a far more useful alternative to the iPad than most other challengers.

ThinkPad X1 Carbon 2014

The second-generation ThinkPad Carbon was a showcase of just what you could do with carbon fiber to create a fully functional, thin, light laptop that made the MacBook Air look stupid. One other unique feature was its adaptive function key row, which used grayscale e-paper displays to provide function key options customizable by the user. It was certainly innovative, but users preferred the keys the way they were, so this feature was replaced by a more common row of function keys in 2015. Still, it showcased a willingness to try something new and different, and if you don’t try new things, you won’t discover new opportunities. Sadly, like the butterfly keyboard, this feature didn’t last, but it was certainly innovative for its time.

The ThinkPad Fold

ThinkPad was aggressive with foldable displays, first coming up with one that folded down from 13” to a purse-sized form factor and then, more recently, releasing one (that I think is a far better idea) that folded up from 12” to near 17” for vastly greater potential productivity. Sporting amazing-looking OLED displays, these foldable laptops represent the cutting edge of laptop innovation today and showcase that while Apple hasn’t had a truly innovative laptop idea in years, ThinkPad is still leading the market in productivity-focused innovation.

Wrapping up:

It has been 30 amazing years for ThinkPad, and it’s a wonder that the unit, which transitioned from what was once the IBM PC company to Lenovo in 2005, is still out-innovating its peers with a product line focused on business users, not consumers. Throughout its 30 years of existence, ThinkPad has been known for its consistent look, extremely reliable nature, and tight focus on customers who want to be proud of their laptops and get real work done. ThinkPad is the brand you buy if your livelihood depends on your PC working and, I expect, it always will be. Happy 30th anniversary, ThinkPad!



Source: TG Daily – ThinkPad’s 30-Year Anniversary:  Innovation Squared

Intel’s Interesting Arc A770 Discrete Card Strategy

Intel-Arc-A770-gpu-1

Last week at Intel Innovation, Intel released its first discrete graphics card. It’s nice looking, but can it compete with AMD’s and NVIDIA’s best offerings? The answer is no.

But it would have been impossible for Intel to come up with a new card that beat the best from two companies that have been running hard against each other in that segment for decades. So, Intel came up with a different strategy: create a card priced solidly in the value space but equipped with features often found only in premium cards. This not only delivers a lot of value where volumes are typically higher, but competitors won’t want to respond for fear of devaluing their higher-end offerings.

Now, this is a new card, which means there will be growing pains. If you aren’t into first-generation teething problems, you’d best wait a few months so that any driver or incompatibility issues are addressed by others. But for those who want a decent level of performance in an affordable graphics card, the Intel Arc A770 is for you.

The strategy is very similar to how Japanese and Korean car makers came to market. They hit low price points with cars that had features often not found until you got to premium offerings to establish themselves in the market, then brought out ever more capable products over time. This strategy allowed Toyota to eventually challenge and take leadership from General Motors.  

Let’s talk about Intel’s Arc A770 Limited Edition discrete graphics card this week.  

Perception and reality

A good marketing program focuses on the perception of the related product because, to the buyer, perception is reality. The new Intel Arc A770 Limited Edition card leads with perception. Limited editions are typically the first cards to market at the high end of any line and are often relatively unaffordable since they go for a significant premium.

In its most expensive configuration, the Arc A770 Limited Edition card is only $349, a $20 increase over the base card, which is the real value. The difference between the two cards is 8 GB of additional memory. The reason to offer both is to create the impression in the buyer’s mind that $20 for 8 GB of memory you couldn’t otherwise add is a great deal.

Exclusivity tends to connote buyer status, and status drives sales. This is at the core of Intel’s naming gambit. Intel doesn’t have cards in the nose-bleed pricing segments, so the card it does have is its flagship. Typically, such a move would devalue more expensive cards, but Intel doesn’t have those, which allows it to put features in this card, like XeSS, Smooth Sync, Speed Sync, and Arc Control, that you’d typically find in more expensive offerings.

Performance

Due to its new technology, Intel’s Arc card performs best with the latest games that take the XeSS upscaler into account. This covers a lot of current AAA titles, but older titles created before the card was released won’t perform as well. One of the most thorough benchmarks of this new card can be found at PCMag, which confirms that with current games the card overachieves, while with older games it underperforms its competitors. However, once those older games are updated, their support for this new card will likely improve.

The key is XeSS support, which allows the system to render at a low resolution and then upscale the result. If a game doesn’t support XeSS, the card is hampered competitively, and performance will be lower than with other similar cards. The gameplay isn’t horrible, but clearly this card is best for those playing current titles until older titles are updated for this new upscaling technology.
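The render-low-then-upscale idea itself is simple to sketch. XeSS uses a far more sophisticated machine-learning upscaler; the function below is a hypothetical nearest-neighbor sketch of the general concept, not Intel’s algorithm:

```python
def upscale_nearest(frame, factor):
    """Upscale a 2D frame (a list of pixel rows) by an integer factor
    using nearest-neighbor sampling: each source pixel becomes a
    factor x factor block in the output."""
    out = []
    for row in frame:
        # Widen the row: repeat each pixel `factor` times horizontally.
        wide = [px for px in row for _ in range(factor)]
        # Repeat the widened row `factor` times vertically.
        for _ in range(factor):
            out.append(list(wide))
    return out

# A tiny 2x2 "rendered" frame upscaled to a 4x4 output for display.
low_res = [[10, 20],
           [30, 40]]
high_res = upscale_nearest(low_res, 2)
```

The performance trade is that the GPU renders only the small frame yet displays four times the pixels; the difference with XeSS is that its ML model reconstructs detail that a naive upscale like this one cannot.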

Wrapping up: 

For early adopters, the Intel Arc A770 Limited Edition looks to be good value, and that value will increase as more game developers embrace its unique upscaling technology. It is a pretty card and will visually enhance most any system with a window in the case to show it off. At $349 it is one of the best desktop PC values on the market, though higher-end cards from both NVIDIA and AMD will outperform it. For many, however, that extra performance may be unaffordable or unneeded, depending on the game title.

As is always my advice with brand-new products, it would be wise to wait a few months before buying this card so that any problems are discovered by others. So far, even the most rigorous testing hasn’t identified a major problem, only tuning issues, but there’s almost always breakage when a new technology like this rolls out. The real test will come when thousands of people use the card.

Best of luck and good gaming!



Source: TG Daily – Intel’s Interesting Arc A770 Discrete Card Strategy

Russian-Speaking Hackers Target US Websites In A Series of Cyberattacks

cyber_attack

A pro-Putin hacking cell has announced plans to wreak havoc across official US websites over the next three days.

The targeted websites are believed to have been subject to distributed denial-of-service (DDoS) attacks, which seek to disrupt a site by flooding it with web traffic in a bid to knock it offline.

For each of the websites, the hackers have detailed how they plan to disrupt its services.

Russian-speaking hackers knock US state government websites offline | CNN Politics

Russian-speaking hackers on Wednesday claimed responsibility for knocking offline state government websites in Colorado, Kentucky and Mississippi, among other states — the latest example of apparent politically motivated hacking following Russia’s invasion of Ukraine.

Continue reading on CNN

Russian hackers reveal list of American targets for attack

The alleged attack by Killnet temporarily knocked out several government websites on Wednesday.

Continue reading on Newsweek

Montenegro wrestles with massive cyberattack; Russia blamed

The coordinated attack, which started around Aug. 20, crippled online government information platforms and put Montenegro’s essential infrastructure at high risk.

Continue reading on NBC News



Source: TG Daily – Russian-Speaking Hackers Target US Websites In A Series of Cyberattacks

Musk’s Post on Russia-Ukraine Peace Plan Triggers Uproar

Elon Musk Space Elon Spacex Tesla Technology

Tesla Chief Executive Officer Elon Musk drew the wrath of Ukrainians from the president on down for Twitter posts urging Ukraine to seek a negotiated solution to the invasion by Russia and to cede Crimea for good.

Musk also launched a Twitter poll asking citizens of occupied areas of eastern Ukraine recently annexed by the Kremlin — plus Crimea, which Moscow took in 2014 — to decide if they want to live in Russia or Ukraine.

President Volodymyr Zelenskiy responded by posting his own poll to Twitter, asking his followers whether they preferred an Elon Musk who supports Ukraine or one who supports Russia.

Musk’s plan to end Russian war infuriates Ukraine on Twitter

Elon Musk has gotten into a Twitter tussle with Ukrainian President Volodymyr Zelenskyy after the tech billionaire floated a divisive proposal to end Russia’s invasion. The Tesla CEO, who on Tuesday revived a $44 billion deal to take control of Twitter, argued in a tweet that to reach peace Russia should be allowed to keep the Crimea Peninsula that it seized in 2014.

Continue reading on AP NEWS

Elon Musk’s SpaceX sent thousands of Starlink satellite internet dishes to Ukraine, company’s president says

Elon Musk Stole My Old Plan for Peace in Ukraine. Too Bad It Doesn’t Make Sense Anymore.

These ideas might have worked in March. Now, not so much.

Continue reading on Slate Magazine



Source: TG Daily – Musk’s Post on Russia-Ukraine Peace Plan Triggers Uproar

Why do we need videoconferencing?

video conferencing

Modern video conferencing is not just a video phone connecting several sites, as some might think. It is more complex than that: an entire computing technology that allows people to hear and see each other, exchange information online, and process the information they receive.

Videoconferencing systems have made this possible. To conduct a video conference, you need the right equipment and the ability to connect with your interlocutor through various communication channels (including satellite) that meet the necessary requirements.

Today, many programs offer video communication between people, allowing issues to be resolved quickly. One such application is the iMind platform, which provides a wide range of services to its users.

Can you do without video?

Why do we need video conferencing at all when telephone conferencing already exists? According to research, a person perceives about 80% of information through the eyes. Video conferencing is therefore a very useful thing, and its scope of application is quite wide.

Video conferencing is essential for conducting management briefings across multiple sites. Doctors can hold an urgent consultation involving specialists from all over the world. Video conferencing can also be used for:

  • training seminars;
  • workshops;
  • master classes.

Also, videoconferencing can be useful in security systems and in a number of other industries.

Video conferencing is now a common thing for most people. A few years ago, it was a hard-to-reach technology: to use it, you had to buy an expensive camera and install a high-quality Internet connection, which very few could afford. Now a lot has changed: the Internet has become faster, webcams have become cheaper, and almost every laptop and smartphone has a built-in front camera. In general, videoconferencing is now available to everyone, and it significantly expands the horizons of human capability.

At first glance, this technology does not look shocking: there is a camera that shows things in real time, and you can chat. But think about it. Humans have essentially eliminated distance as a factor that divides society into small groups. Today, video conferencing erases barriers: sharing knowledge, building a business, and communicating are easier than ever. Moreover, the quality of video communication improves every year, and the sense of presence with interlocutors located far from each other is already palpable.

Videoconferencing brings many benefits to society. First, physical boundaries lose their significance. You can communicate with people in other countries absolutely freely, regardless of kilometers and customs checkpoints. This, in turn, makes the whole world more united and closer, and the development of the technology leads to greater mutual understanding among people.

Secondly, videoconferencing opens up new business opportunities. All business, regardless of industry and scale, is built primarily on communication between people. It is the basis, the foundation on which all industries are built. And today, this foundation has become much easier to build.

Videoconferencing has made it possible for businesspeople to communicate more easily and cheaply and to avoid exhausting, costly business trips. Today it is enough to make a video call, and a businessperson is in touch with partners and colleagues regardless of the distance. Videoconferencing has already become an essential tool for businesses around the world.

What are the disadvantages?

The disadvantages of video conferencing are not obvious. It is quite possible that the spread of new forms of communication will lead to changes in society, and videoconferencing can be a driver of that change. Is that good or bad? Let each of us answer this question independently.

Written by Adam Eaton



Source: TG Daily – Why do we need videoconferencing?

Five Stages of a Software Development Life Cycle

Software Internet Webdesign Web Design Planning

Developing software is a complex task, and many details must be kept in mind so that nothing important is missed. As technology progresses, software developers need to keep up with the latest advances to build truly efficient software. It is not easy, but in the end a quality product is always worth the resources put into its development.

Technology helps save time and other resources, but detailed work has to be done to create a robust solution. So what does it take to make a high-quality product, and what stages does software development consist of?

Let’s analyze the software development life cycle and look at some software development trends to get a bigger picture.

What is the Software Development Life Cycle (SDLC)?

The software development life cycle is a term that describes the main stages software development consists of and the general pattern it follows. The structure of the cycle gives developers a reliable plan that allows the software to be effective. Planning also saves time and helps organize the whole process. Hard work combined with thorough analysis contributes to creating software with efficient functions and high-quality design. Such products last longer and can help businesses achieve their goals.

All of the stages in this cycle are interconnected, and each of them provides the result for the next one. The final stage summarizes everything as the implemented software is tested.

So, what are the five stages of a software development life cycle?

Analysis Of The Requirements

This is the most important stage of the whole software development process. Before you start developing software, all of the company’s requirements need to be analyzed, which is why starting a discussion with various specialists is helpful. All sides of the development process need to be understood and addressed; otherwise, there will be misunderstandings right at the beginning, leading to further complications. By getting a picture of what purposes the software is needed for, what the conditions of use are, who will use it, and what the desired outcome is, you can learn the company’s general requirements for the software. Knowing the purposes also helps you understand how the company’s professional goals can be achieved more easily and how a digital solution may help.

If you have a clear plan of what the usage process will look like, it will not take much time to design the product and pin down the particular details required for the software to be efficient. This analysis is also necessary for defining quality requirements and for effective risk management.

The main goal of requirement analysis is to make sure the goals are clear, and developers have a clear picture of what the software will be used for and how to serve the company’s needs well.

Design Development

The design stage defines all of the next stages of effective software development. You need to have a clear picture of what a solution should look like and what the components will be. 

The “Requirement Specification” document created in the first stage of the process serves as a guide for developers. Both the software and system design are developed according to these instructions, which is important for defining which parts of the design matter most. The parts of the product defined during this stage include:

  • various parts of the system;
  • what hardware is required;
  • the entire architecture.

The output of design development shapes the final product and is used in the next stage for enhancements. After the design process, coding takes place. The design plan is created by lead developers and technical architects, and all of the main requirements are incorporated into it. The process is somewhat similar to company website development, as it also involves robust architecture and a detailed structure. Knowing the structure makes it easy to navigate the creation of the product.

Implementation

The coding itself is done in this part of the software development life cycle. The necessary data is created and placed in a database by an administrator, and the interface is built. The design specifications allow developers to produce the code that is used in the next stage of the cycle. This stage is the longest and takes the most effort, but it is also critically important, as it forms the core of software development. IT web development companies spend a lot of resources on this stage.

This stage also implies that many changes may be made, so developers need to be prepared for pressure. But quality execution of the coding process prevents potential mistakes in the future.

Testing

Testing is required to know whether the product meets all the necessary requirements and whether anything is missing. To gauge the software’s readiness, various testing methods are applied, including unit testing, integration testing, system testing, and so on.

After these tests, you can evaluate whether the software meets the desired requirements and is ready to be used. Testing is essential for understanding the weak parts of the product and preventing potential issues, as there is always a chance for mistakes to appear.
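As a minimal sketch of the unit-testing method mentioned above: the `apply_discount` function and its tests below are hypothetical, invented only to show how a single requirement is verified in isolation:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    # Each test checks one requirement captured in the analysis stage.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest` to execute the suite; integration and system tests build on the same idea but exercise several components, or the whole product, together.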

Maintenance

This is the final stage in which the software is ready to be utilized. If all of the requirements of planning and design are met, then the product is suitable for the company’s needs and may perform all necessary functions. 

This stage is also critical for the whole development process, as practical usage of the product may reveal potential issues and the weak parts of the software; even QA testing does not prevent all additional problems from occurring. Once noticed, these problems can be addressed by marketplace development agencies and all of the requirements met. Routine maintenance and regular upgrades are essential for keeping the product high quality.

Conclusion

Software development is a complex and detailed process, and many aspects need to be attended to and refined during its various stages. Good software requires thorough planning, elaborate design, and many hours of coding before a product can be used. To create something that will serve effectively for a long time, many smaller stages have to be completed and the final outcome polished. Even after the software is released, it needs to be updated constantly.

To stay up to date, developers need to follow the latest technological trends and use quality tools. By combining technology with analysis and research, professionals can create a truly robust solution.

Many stages of the software development life cycle may seem difficult, but all of them are necessary for quality software. Mistakes are inevitable in this process too, so it is necessary to learn from them and apply the experience gained to future development for better results.

Written by Adam Eaton



Source: TG Daily – Five Stages of a Software Development Life Cycle

The Upcoming Stumble Guys of Web 3! Seize the Initiative of Puffverse – PuffGo


The Rise of Party Game

Stumble Guys, from Kitka Games, has finally achieved its breakthrough two years after launch, with monthly revenue exceeding 100 million RMB. It is a battle-royale party game in which up to 32 players race through chaotic obstacle courses. Run, jump, and dash to the finish line until the best player takes the crown!

Party games are definitely one of the main pillars of the current mobile game market. As GameLook has reported, the party game is the one and only category that has held up in the face of declining overall U.S. mobile game revenue this year. In August of this year, the gaming industry publisher Rollic also gave a very optimistic outlook for the future development of the party game category.

For instance, Flappy Bird, which can't go unmentioned when it comes to party games, is often credited as the originator of the genre and was developed by the Vietnamese developer Dong Nguyen. It is a user-friendly game, so simple that anyone can pick it up instantly without even concentrating on it wholeheartedly. This easy-but-addictive formula attracted massive numbers of players, with over 50 million downloads in a short period of time, and the party game mode was set from then on. Party games were destined to go viral on Web 2, as the multiplayer model and social appeal are the main factors reinforcing their popularity, bringing skyrocketing discussion and sensation along with them.

The Coming of the Party Game in Web 3

Web 3 games incorporate economic elements, giving players the ability to generate, collect, earn, and utilize tokens from which they can gain actual income. Puffverse recently released details about its upcoming Web 3.0 gaming project, PuffGo. Within the metaverse that its development team is building, PuffGo is only a small part of the whole ecosystem, but the team chose it as the first product to introduce Puffverse to the public.

In terms of PuffGo’s storyline, the Puff team has spiced up the characters and story for fun and interactivity. As a sneak peek: it is an interlocking story in which all the characters interact with one another in some way. The story starts with the disappearance of the Demon King and the invasion of the Demon King City by the vicious Moru evil group, led by a cunning and self-centered character, Mr. B. All the main characters are connected to the invasion in some way, and each has a unique personality.

Skill to Earn

While many existing Web 3 games are primarily webpage games, PuffGo is a comprehensive metaverse world with a content-rich storyline that uses economic features so players can have fun as well as engage with Web 3 assets.

The game contains a wealth of level themes and gameplay and offers a variety of match modes to choose from. Players can collect game characters with different appearances, personalities, and abilities, and dress them in various costumes. In addition, players can take advantage of different characters’ specialties to participate in a variety of levels. Each character comes with abundant skills and strategies that can be used wisely to compete for higher rankings and better resource rewards. In terms of level design, players experience things differently depending on whether or not they hold NFT character assets. The modes differ in the number of participants, the way teams are formed, and the rewards.

Players can improve their individual combat capacities or team attributes with a sensible combination of characters to secure victory. This is how Skill to Earn works: the point is to really master the game, use your experience, and actually finish the match. The game aims to provide players with an entertaining and differentiated experience, allowing them to bring each character, with its strategies and varied costumes, into diverse themed levels. Each character has a distinct personality and figure, with its own storyline, unique skills, and attributes that play to its strengths in different scenarios. With them, players can live and play carefree, immersing themselves in the NFT game metaverse, Puffverse, for the best game experience in the crypto world.

About Puffverse

It is pretty smart to release PuffGo, the Web 3 gaming, as the first step of Puffverse, considering the heat Stumble Guys and Fall Guys have brought. But there’s more beyond the game itself.

The typical single-character setup was abandoned in favor of a character set that emphasizes “Who I am” by letting users choose from different characters. Each character has its own traits: everyone is born different, and everyone can confidently be their best self. There is always a role that an individual player can project onto, triggering maverick Gen Z users’ empathy for “being brave and being yourself” and allowing them to accept and integrate into Puff’s world. Puffidentity is what matters most, and it can be used across all products in Puffverse.

The Puffverse team’s main focus right now is on creating a Disney-like metaverse, with the Puffs as the characters inside it. Users can access the metaverse through TVs, mobile devices, virtual-reality headsets, and other devices to play socially, competitively, and for fun; players can even design their own UGC land in the metaverse to earn from it. The team has extensive experience in the gaming industry and reliable partners, and it will keep enhancing the game’s quality and providing players with a variety of experiences.

Behind Puffverse

As the pioneer that combined Web 3 with party games, Puffverse has attracted the attention of numerous investors. Starting in 2019, limits imposed by the pandemic left the majority of people worldwide unable to travel as freely as before, but the need for socialization remained. With 300 million users worldwide, Xiaomi runs a user-centered platform where quality content and the platform complement one another in a positive feedback loop. A team from Xiaomi therefore decided to expand its investment in content production, choosing this track in the medium-casual competitive category given the overall situation at the time (COVID-19), in the hope of increasing the stickiness of platform and product users. The team also collaborated with Unity to develop the underlying framework of the product and designed a full Unity technology stack to realize the product plan. Unity provided the technical support for the Puff project, and a top technical art (TA) team was brought in to realize the product’s art and animation. Based on Unity, the team created the base framework of the editor to give users the best action-game experience once it was introduced to the public.

The crypto world needs some excitement and freshness. Puffverse and its first product, PuffGo, will definitely be the most-anticipated gem among many.

Written by Awais Ahmed



Source: TG Daily – The Upcoming Stumble Guys of Web 3! Seize the Initiative of Puffverse – PuffGo

AiDot Inc. Obtained Level 5 Certification of CMMI-DEV V2.0


After more than two months of auditing and confirmation by the CMMI Institute in the United States, AiDot Inc. officially passed the CMMI-DEV V2.0 certification appraisal and obtained Level 5 certification, currently the strictest and most authoritative certification in the international software research and development field.

This is another important milestone for AiDot Inc. to obtain authoritative certification in terms of product quality, innovation strength, management, and service level.

What does being a CMMI Level 5 company mean?

Passing the CMMI Level 5 appraisal indicates that AiDot Inc.’s software R&D, project management, quality assurance, and solution delivery capabilities continue to meet the international advanced level, and that the company can provide customers with more mature industry solutions. Higher-quality service is of great significance to the company’s sustainable development in the future.

Statistics from the official CMMI website show that only 15.8% of the 10,987 CMMI-certified companies have achieved Level 5 certification. According to the SEI (the Software Engineering Institute at Carnegie Mellon University), software development companies that are CMMI certified and actually implement the CMMI management model have improved their project estimation and control capabilities by about 40%-50%, increased productivity by 10%-20%, and cut product error rates by more than a third.

The scope of AiDot’s evaluation included 19 process areas, such as software requirements development and management, technical solution implementation, project management (including planning, estimation, monitoring, risk management, and quantitative management), Agile software development practices, root-cause analysis, decision analysis, peer review, testing management, training management, configuration management, organizational process assets, and high-level governance. The evaluation team unanimously determined that AiDot Inc.’s software R&D process system complied with the high-maturity specifications and requirements of CMMI-DEV V2.0 and had successfully passed the Level 5 certification of CMMI-DEV V2.0.

About CMMI:

CMMI, the Capability Maturity Model Integration, was commissioned by the U.S. Department of Defense and developed by CMU-SEI (Carnegie Mellon University’s Software Engineering Institute). It is the industry standard for measuring the capability maturity and project management level of software enterprises, and the most authoritative software capability assessment system in the world. CMMI maturity Level 5, the optimizing level, is the highest level.

About AiDot: 

AiDot is a smart home platform that connects devices across brands and ecosystems. With AiDot, your home becomes a connected space that makes your life simpler, safer, and more entertaining. 

The AiDot app is the central part of the platform, controlling all the smart home devices installed in your home. No matter where you are, you can control “Works with AiDot” devices in your home, including lights, switches, outlets, cameras, sensors, and household appliances, or create scenes and automations around your routines.

“Works with AiDot” (WWA) is a mark of interconnectivity across different brands and categories. You can easily control any product featuring the WWA label with the AiDot app. Brands that have joined the AiDot ecosystem include well-known smart device brands such as Linkind, OREiN, Mujoy, Winees, WELOV, Syvio, GoGonova, and Ganiza. 

For more information, visit: www.AiDot.com
Contact us: marketing@aidot.com



Source: TG Daily – AiDot Inc. Obtained Level 5 Certification of CMMI-DEV V2.0

Intel Innovation 2022 Keynote: The Return of IDF

CEO Pat Gelsinger is working to return Intel to the power it once was. Intel has some significant advantages, like its own fabs, and it remains entrenched in enterprise standards worldwide, but years of neglect of its core businesses are hard to overcome. One of the first tasks is to recapture the interest of developers. The sad part is that Gelsinger is considered the founder of IDF, the old Intel Developer Forum that a tactically minded predecessor killed. Intel Innovation is the replacement for IDF. It’s smaller than any IDF I remember, likely because folks still don’t want to travel. The event also comes during a shift in interest toward ARM and RISC-V, which could provide an interesting opportunity for Intel to recapture some developers.

This week, let’s talk about what Pat Gelsinger shared at Intel’s Innovation keynote.

Software-defined silicon enhanced

As noted above, Intel is in the midst of a turnaround, with a greater focus on software in the future and what appears to be a shift toward providing services to other companies. Intel is embracing a term the company avoided throughout most of its life, one that became so popular that back in the early 2000s Microsoft and IBM, which had both avoided it as well, also pivoted to it. That term is “open.” It embodies where customers, especially huge enterprise customers, want their vendors to go, and those vendors are moving aggressively to embrace the related concepts.  

Concepts like Intel Shuttle aim to educate and train the next generation of engineers. Gelsinger announced that he is committed to expanding this kind of educational engagement significantly in the future.  

Intel GPUs

Gelsinger announced Intel’s new line of HPC GPUs, which are used as accelerators in high-performance data centers. This new area for Intel comes with some interesting background. When Gelsinger last left Intel, he left partly because of a prior failed GPU effort. That wasn’t his fault; it resulted, like so many failures I’ve studied, from an unwillingness to recognize that the path Intel was on was the wrong one, with working groups actively covering up that the technology didn’t work. Gelsinger was blindsided, which gives him a unique motivation to get it to work this time. 

At Innovation, he also spoke about gaming, and while Intel’s new ARC GPUs aren’t going to be competitive with the best from AMD or NVIDIA, they will provide a low-cost alternative to those companies’ mid-range and entry-level lines. The strategy is based on penetration pricing, which makes funding the related marketing problematic, but if Intel invests adequately, it should be successful. 

Intel has an interesting play here, providing a very affordable alternative to more expensive cards from competitors with what should be better upscaling and AI support for that price point. It will require some impressive marketing to get users to see what a great value this card potentially is, but the strategy, assuming the card performs as promised, is compelling. 

Intel developer cloud

Innovation is a developer conference, so Gelsinger brought one of the Intel developers on stage. Given that Intel, like much of the technology industry, has been defined by old white guys, the developer Gelsinger called up was fascinating. Ria is an 18-year-old out of Harvard who has been working at Intel for years and is a prodigy. It is essential to get young people excited about STEM, particularly engineering, and showcasing a young woman for a significant portion of Gelsinger’s keynote should draw attention to Intel’s diversity efforts, where the company is sincerely trying to make a significant difference. 

In watching this, I was reminded of the problems of scripting a presentation like this with teleprompters and the need to ensure people are comfortable with the technology. Ria, the youngest on stage, took to the effort well and arguably did as well as or better than her older peers. 

They walked through some of Intel’s AI modeling tools focused on computer vision, presenting a relatively simple but impressively capable Intel AI creation tool. The example they gave targeted farming, which is often at the low end of the scale for technology innovation but is an area with a massive current effort to create solutions that avoid pesticides and produce crops more resistant to climate change, in order to avoid what would otherwise become worldwide famines. It showcased how farmers with little training could potentially train AI models themselves to increase farm yields. 

Gelsinger announced the launch of Intel GETi (available in Q4 this year) to help enterprises create and modify AI models. The related demo with Chipotle used PreciTaste and OpenVINO in combination with Intel’s AI tools to optimize restaurant operations and reduce operating costs. I’ll be happy if they can make Chipotle’s chips edible (our local store puts so much salt on its chips that they’re practically inedible). Given that this is computer-vision based, and assuming the cameras can see the amount of salt, it could fix my problem with our local Chipotle. The technology is currently in a limited test, but Chipotle announced it would be rolling it out to its stores shortly.  

Gaming

Intel was once heavily involved in gaming, sponsored LAN parties, and aggressively supported video game companies. Gelsinger took the company back to those roots and showcased a developer from Inflexion Games on stage using the Unreal Engine. She was able to run up to eight game instances at the same time for debugging and scene modification. The game being demonstrated was Nightingale, which appears to be a steampunk-fantasy RPG; it looked very realistic.  

13th Generation Intel Core

Gelsinger announced Intel’s 13th-generation platform (due to market next month) and argued that it has the most robust single-threaded performance of anything on the market. It will be interesting to match it up against the new Ryzen processors that AMD also announced this week (I have an AMD system in test, and it is impressive, setting a high bar for Intel to overcome). Intel promises 6 GHz out of the box at the top of the line, which is impressive performance. 

Text to image

This technology, which I first saw at an NVIDIA GTC event years ago, allows a person to describe a picture that the computer then creates. Both my mother and my first stepmother were artists, so you’d think I would be able to draw, but to my eternal embarrassment, my art skills suck. Gelsinger announced and showcased the GAUDI2 accelerator, which did a decent job of creating attractive pictures from descriptions; I’m looking forward to using it.  

Samsung Extended Screen and Intel Unison

PCs are screen-constrained, and Samsung, Lenovo, Motorola, and others have been exploring foldable displays. What Samsung brought on stage was a slidable display: just pull the edges of the display, and it grows from 13 inches to 17 inches. It reminds me of a future computer I saw in an old Gene Roddenberry TV series years ago. It was an impressive demonstration, and I can hardly wait to see one of these screens in person. Coupled with this new display, Intel showcased a new feature called Intel Unison, which offers single-tap connections to peripherals; they demonstrated it working with the Samsung slide display. Acer, HP, and Lenovo will be the first to market with this option.  

Silicon optics 

This USB-sized connector uses fiber optics to create an extremely high-speed connection. The implication is that future optional USB optical connections could provide wired networking speeds vastly faster than we can get today.

Wrapping up:

To close out the talk, Linus Torvalds came on stage. Torvalds is considered the father of Linux and one of the founders of the open-source movement. He created Linux on Intel technology; his initial effort came about because he couldn’t afford to buy UNIX, so he wrote his own OS, which has grown to become one of the true industry powers, driven primarily, at least initially, by volunteers. Interestingly, Torvalds doesn’t consider himself a visionary but a plodding engineer who works on current problems and doesn’t concern himself much with what will happen decades in the future. Gelsinger gave Torvalds the first-ever Intel Innovation award.   

In closing, Gelsinger gave us a near-term peek into the future, one that promises a more open, more agile, and more user-focused Intel. 



Source: TG Daily – Intel Innovation 2022 Keynote: The Return of IDF

OKLink to realize Web3 goal by strengthening multichain explorer strategy


Blockchain data and information service provider OKLink has strengthened its product development this year by releasing a series of upgrades across its product line, including several notable changes to its multi-blockchain explorers. These are viewed as critical steps toward realizing its parent entity OKG’s ambition of becoming a Web3 conglomerate.

A highlight of OKLink’s blockchain explorer pre- and post-Ethereum Merge

“We are confident of becoming a hub in the Web3 era as demand for blockchain data has increased substantially along with the constant development of the crypto world,” said Hermione, head of the multichain explorer product team at OKLink.

Hermione’s confidence is not groundless. Not long ago, she and her teammates witnessed a fourfold growth in website traffic around the Ethereum Merge on Sept. 15. “We had prepared a countdown page for the Merge quite early so that our users could always know when it would happen,” added Hermione. As the Merge drew near, the team’s efforts started to get noticed as more and more people posted screenshots of the page.

Another thing Hermione was delighted to talk about was that OKLink was the first in the industry to launch a reliable EthereumPoW (ETHW) explorer and is now the only official blockchain explorer provider for ETHW, a forked blockchain of Ethereum created after the Merge.

User-driven design and globalization: OKLink’s strategy for its multichain explorer development

It was no accident that the market gave positive feedback on OKLink’s products. “We’ve made tangible progress in developing and honing the multichain explorer this year thanks to the strategy we adopted,” explained Hermione. “Because our users’ opinions carry more weight in the decision-making process, we’re encouraged to strive to make blockchain data understandable and accessible for the wide and varied audience interested in blockchain and crypto.”

The multichain explorer developed by OKLink incorporates a multitude of different blockchains into one search engine. Users can filter blocks, transactions, and content by various criteria across blockchains. It currently supports 18 public blockchains, all widely followed in the market: Solana, Polygon, and Avalanche, to name a few. Moreover, at least eight more explorers for public blockchains, such as Optimism, Arbitrum, Fantom, and Aptos, are waiting to be rolled out. 

In addition, the open APIs provided by OKLink’s multichain explorer grant developers and researchers full access to the data on four blockchains, and all of its blockchain explorers will support API access in the future. The API can support numerous analytical queries, such as filtering, sorting, and aggregating up-to-the-minute blockchain data. Businesses can also benefit from connectivity and query options that tailor and aggregate meaningful data in many ways, providing fast and reliable products for traders, custodians, exchanges, and private investors. 
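To make the filtering, sorting, and aggregation concrete, here is a client-side sketch. The JSON response shape below is invented for illustration only; it is not OKLink's actual API schema:

```python
import json

# Hypothetical explorer API response (not OKLink's real schema):
# a list of transactions, each with a chain, value, and timestamp.
sample_response = json.loads("""
[
    {"chain": "polygon",   "value": 12.5, "timestamp": 1663200000},
    {"chain": "solana",    "value": 3.2,  "timestamp": 1663200050},
    {"chain": "polygon",   "value": 7.5,  "timestamp": 1663200100},
    {"chain": "avalanche", "value": 40.0, "timestamp": 1663200150}
]
""")

# Filter: keep only transactions above a value threshold.
large_txs = [tx for tx in sample_response if tx["value"] > 5]

# Sort: newest first.
large_txs.sort(key=lambda tx: tx["timestamp"], reverse=True)

# Aggregate: total transferred value per chain.
totals = {}
for tx in sample_response:
    totals[tx["chain"]] = totals.get(tx["chain"], 0) + tx["value"]

print(totals)
```

A real integration would fetch this JSON over HTTP with an API key, but the post-processing pattern (filter, sort, aggregate) is the same.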

Furthermore, for the convenience of users around the globe, OKLink’s multichain explorer can switch between 12 languages. Blockchain data is also mostly well visualized in charts or diagrams to meet users’ needs for comparing trends across blockchains. 

The new normal: Multichain explorer as a vital entrance to the Web3 world

For any crypto market participant, whether a trader, a crypto wallet service provider, or a crypto exchange, a blockchain explorer is a must for understanding what is happening on the blockchain in real time.

There is broad consensus that search engines are critical Web2 infrastructure, and we are all used to finding data or information by “Googling” around. In the era of Web3, however, most data will be stored on the blockchain, a ledger that contains all the transactions ever processed. Consequently, finding one particular data entry would be like looking for a needle in a haystack. The easier way to examine and explore on-chain data and information is to use a blockchain explorer, and doing so will gradually become Web3 users’ new normal.

With the ongoing development of the blockchain industry and the introduction of new techniques (especially the interoperability between blockchains), a significant trend has formed where the multichain explorer is becoming more and more popular. 

Just like dining at an all-you-can-eat restaurant, one can access data and information from various blockchains at the same time through a multi-blockchain explorer. Furthermore, the service provider can add functions and features to the explorer, such as in-depth research and analysis of the crypto market. 

About OKLink

OKLink’s parent company, OKG, is one of the earliest blockchain companies founded in China and has since developed into a conglomerate and a leader in the blockchain industry. Established in 2013, OKG has been committed to the research, development, and commercialization of blockchain technology. OKLink has been the OKG subsidiary dedicated to blockchain data and information services since 2018. Visit the website and Twitter for more information.

Twitter: https://twitter.com/OKLink



Source: TG Daily – OKLink to realize Web3 goal by strengthening multichain explorer strategy

Lenovo’s ThinkSystem at 30: System, Software, Security and Sustainability Innovations

It can be challenging to celebrate longevity in a tech industry obsessed with newness and youth. But decades-long success is difficult to disagree with, especially when a company steadily progresses in market and technical leadership. In the case of Lenovo’s Infrastructure Solutions Group (ISG), the celebration in question is the 30th anniversary of the launch of the company’s ThinkSystem servers. 

To mark the date, Lenovo announced Infrastructure Solutions V3 which it described as “the most comprehensive portfolio enhancement” in its history. The new offerings include ThinkSystem, ThinkAgile, and ThinkEdge servers and storage systems, enhanced ThinkShield security features, a new XClarity management platform, and sustainability-focused solutions, like next-gen Lenovo Neptune warm water-cooling and carbon offset services. Let’s consider Lenovo’s Infrastructure Solutions V3 offerings and what they mean for the company and its customers and partners. 

The path of ThinkSystem innovation

How did the ThinkSystem reach its 30th anniversary? In 1992, when the platform was owned by IBM, the company launched the PS/2 servers which leveraged the same Intel silicon and many of the same technologies and features as IBM’s PS/2 PCs. That name was replaced in 1994 with PC Servers and then a few years later by Intel-based xSeries and System x solutions in IBM’s evolving server branding strategy. 

Lenovo purchased IBM’s PC business and portfolio in 2005 and, nearly a decade later (2014), bought IBM’s System x server business, organization, and intellectual property. As they had during the IBM PC acquisition, some competing vendors attempted to cast aspersions on Lenovo’s server deal. Just as it did in that earlier instance, Lenovo proved doubters wrong by investing in continuing technical and market innovations that helped the company achieve leadership in areas like supercomputing, high performance, and hyperscale computing. 

Equally importantly, Lenovo’s focus on infrastructure IT helped the company reach its goal of being a fully rounded vendor of integrated business solutions that extend from client devices to data centers to the furthest edges of corporate networks to the cloud. 

Lenovo’s Infrastructure Solutions V3

Technical details for the new ThinkSystem, ThinkAgile, and ThinkEdge solutions were thin on the ground, in large part because they will leverage next-gen silicon and other technologies that haven’t been publicly disclosed. That said, the new Lenovo solutions are designed to appeal to organizations from SMBs to large hyperscale players. They include:

  • New ThinkSystem servers in high volume rack and tower configurations, offerings for mission critical workloads and high-density environments, and flash storage systems. The new solutions will incorporate new Intel Xeon, AMD EPYC, and Arm-based CPUs, as well as AMD Instinct and Nvidia GPUs and Nvidia AI Enterprise software.
  • New ThinkAgile hyperconverged infrastructure (HCI) offerings that are pre-integrated with Microsoft (ThinkAgile MX and ThinkAgile SX), Nutanix (ThinkAgile HX), and VMware (ThinkAgile VX) solutions. Among these are three new Lenovo Microsoft Azure Solutions: 1) SQL for AI and Machine Learning (ML) Insights, 2) Backup and Recovery, and 3) Azure Virtual Desktop. 
  • New ThinkEdge servers featuring Intel Core & Intel Xeon processors, NVIDIA Jetson Xavier NX embedded solutions, and Arm processors. The new Lenovo Open Cloud Automation (LOC-a) V2.5 is designed to securely authenticate and activate ThinkEdge AI servers via a phone, accelerating business insights. 
  • Lenovo XClarity One, a new open cloud software management platform that offers a single portal for managing Lenovo’s TruScale Infrastructure-as-a-Service (IaaS), Management-as-a-Service (MaaS), and Smarter Support functions. XClarity One is designed to simplify how customers manage IT orchestration, deployment, automation, metering, and support processes from the edge to the cloud. It will also provide visibility into infrastructure performance, usage metering, and support analytics.
  • Enhancements to Lenovo ThinkShield Security, including Modular Root of Trust which bolsters on-chip tamper-detection and monitoring to help customers detect and recover from cyberattacks and digital compromises. In addition, Lenovo System Guard incorporates advanced hardware monitoring to enhance the security of systems during manufacturing, delivery, and deployment. 
  • The fifth generation of Lenovo Neptune Direct Water-Cooling technology will be available on a broader range of servers. This feature uses loops of warm water to cool systems more efficiently than conventional air, enabling customers to reduce power consumption by up to 40 percent.
  • Sustainability-focused solutions, including Lenovo CO2 Offset Services which allows customers purchasing select ThinkSystem servers to offset emissions by supporting United Nations climate action projects. Lenovo’s new TruScale Sustainability services offer pay-as-you-go utilization that helps prevent over-provisioning and reduce energy consumption. Lenovo’s Asset Recovery Services are designed to simplify end-of-life asset disposal. Finally, Lenovo noted that innovative packaging designs, such as shipping servers pre-installed in racks have saved over 3.5 million pounds of cardboard to date.
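Taking the up-to-40-percent Neptune cooling figure at face value, a back-of-the-envelope sketch shows what such a reduction could mean. The rack wattage and electricity rate below are illustrative assumptions, not Lenovo figures:

```python
# Illustrative assumptions only -- not Lenovo's numbers.
rack_power_kw = 30.0    # assumed power draw of one air-cooled rack
reduction = 0.40        # "up to 40 percent" claimed saving
hours_per_year = 24 * 365
rate_per_kwh = 0.12     # assumed electricity price, USD

saved_kwh = rack_power_kw * reduction * hours_per_year
saved_usd = saved_kwh * rate_per_kwh

print(f"Energy saved per rack per year: {saved_kwh:,.0f} kWh")
print(f"Cost saved per rack per year: ${saved_usd:,.0f}")
```

Even at this rough level, the arithmetic explains why efficiency features like warm-water cooling double as both sustainability and cost arguments.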

No pricing or availability details were provided in the announcement. 

Final analysis

Data center product portfolios tend to obscure the complexity of IT market evolution and how vendors respond by developing new commercial offerings. The increasing sophistication and segmentation of HCI is a good example of this. While initial solutions focused on businesses’ increasing interest in and use of software-defined IT that virtualized conventional system elements and functions, contemporary products like Lenovo’s ThinkAgile servers are designed to maximize the performance and value of specific partner applications and solutions and related business processes. 

Similar incremental developments are commonplace in virtually every corner of the business compute market, including on-premises corporate data centers, off-premises clouds and hyperscale infrastructures, and combined hybrid and multi-cloud environments. Since innovative vendors manage numerous and various kinds of product and process evolutions simultaneously, it can be difficult to perceive the extent of the IT “forest” due to the number and variety of the solution “trees.” 

As a result, an event such as Lenovo’s 30th ThinkSystem anniversary can provide valuable insights into how a vendor has progressed and what it has accomplished over time. In the eight years since the System x deal with IBM, Lenovo has grown from a client computing powerhouse into a mature provider of end-to-end business computing solutions, including market-leading HPC and hyperscale systems. 

In essence, a portrait of Lenovo a decade or so ago would have depicted an energetic and ambitious individual or small group of colleagues. In contrast, the new Lenovo’s Infrastructure Solutions V3 can be viewed as a family of interconnected members, all of them working toward both singular and deeply integrated goals. Considering the achievements and new offerings summarized in the Infrastructure Solutions V3 announcement, it will be fascinating to see what Lenovo accomplishes during the coming decade. 

Written by Charles King, Pund-IT®



Source: TG Daily – Lenovo’s ThinkSystem at 30: System, Software, Security and Sustainability Innovations

Jensen Huang’s GTC Keynote


GTC is NVIDIA’s premier conference. NVIDIA is at the forefront of several coming waves, ranging from autonomous machines (robots and transportation vehicles) to metaverse creation and digital twins, and it is one of the major drivers of ever more capable AIs. Oh, and NVIDIA’s gaming focus remains strong, so it’s also one of the few firms making our video games much more realistic. What makes the company stand out to me is that it uses its own technology throughout the conference keynote, so the keynote is not only a showcase of announcements but a demonstration of how the products announced can be used. 

Let’s talk about some of the highlights from this year’s keynote.

RTX 4080/90

This was a little depressing for me because I just installed an RTX 3080 in my VR test system. At GTC, NVIDIA announced the 4080/90, which are significantly more powerful than my 3080. Image quality is enhanced, they are a better platform for developing metaverse elements, and the visual capability of these cards is nothing short of amazing. At around $900 for the 4080 and around $1,500 for the 4090, they aren’t insanely expensive.  

Games like Flight Simulator now have a level of realism that is impressive and a little upsetting, upsetting because I don’t have either card, yet.

Omniverse

NVIDIA’s Omniverse platform is the leading development platform for the applied metaverse. Updates now embrace the entire product lifecycle, from design through production. It’s effectively a full 3D production pipeline that can be shared across an organization, enabling teams, which are often geographically dispersed, to collaborate on the creation of products, buildings, advanced systems like robots, and entertainment media. 

This appears to be revolutionizing not only the design and creation of things but media as well, allowing the TV and movie creators of the future to develop high-quality content with a fraction of the budget and staff that would otherwise be required. 

Digital twins are a key part of Omniverse, allowing the initial modeling and the eventual highly automated management of systems that encompass advanced factories, smart buildings and cities, and, I expect, battlefields. 

GTC showcased an implementation that used AR glasses and Omniverse to let employees on the ground blend metaverse elements with real buildings, helping them navigate and make repairs at scale with these next-generation tools. As I write this, a large hurricane is driving toward the East Coast, and I can imagine a future implementation of Omniverse where first responders see an overlay of what was, plus a drone-built metaverse capture of what is, to identify likely places where people may need rescue. And, after the event, it could let those who have lost their towns visualize what once was, so they don’t feel like they lost where they grew up as a result of the disaster.

NVIDIA showcased its GDN (Graphics Delivery Network) that, along with the Omniverse Cloud on AWS, will bring this technology to the world. Backed by NVIDIA’s technology, including uniquely configured and highly focused servers and workstations, this is how many of us will envision the future before it is created. GTC showed how Omniverse was used to create the latest Rimac supercar, a car that will redefine high performance. 

Thor takes over autonomous cars and robots

NVIDIA Drive Thor has just replaced NVIDIA’s prior autonomous driving platform, and it utterly outperforms the earlier technology. The platform can run QNX, Linux, and Android simultaneously and covers all of the processing needs of the car, from driving to entertainment. Much like cloud implementations, these functions can be kept virtually separate and secure. Drive Sim, NVIDIA’s training simulator for autonomous driving, has been significantly enhanced to create simulation scenarios on a global scale. Using a Neural Reconstruction Engine, simulations can be infinitely modified to take into account even the most unlikely events, like a sudden snowstorm in Florida (hey, it could happen). Watching these simulations is amazing because they increasingly look like real roads and cities, and I wonder how long it will be until this tool is used to present evidence in courtrooms or to convince city administrators to fix endemic traffic and safety issues.  
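
NVIDIA hasn’t published Drive Sim’s internals, but the idea of procedurally varying scenarios, including deliberately injecting unlikely events like that Florida snowstorm, can be sketched in a few lines. This is purely an illustrative toy; every name and parameter here is made up, and real simulators vary far more than five fields:

```python
import random

WEATHER = ["clear", "rain", "fog", "snow"]
EVENTS = ["none", "jaywalker", "stalled_car", "debris"]

def generate_scenario(seed, rare_weather_bias=0.05):
    """Deterministically derive one driving scenario from a seed.

    rare_weather_bias is the probability of injecting an out-of-place
    condition (snow regardless of region) so the long tail of unlikely
    events still gets exercised during training.
    """
    rng = random.Random(seed)  # per-scenario RNG: same seed, same scenario
    region = rng.choice(["florida", "michigan", "arizona"])
    if rng.random() < rare_weather_bias:
        weather = "snow"  # the deliberately unlikely event
    else:
        weather = rng.choice(WEATHER[:3])  # common conditions only
    return {
        "region": region,
        "weather": weather,
        "time_of_day": rng.choice(["dawn", "noon", "dusk", "night"]),
        "event": rng.choice(EVENTS),
        "traffic_density": round(rng.uniform(0.0, 1.0), 2),
    }
```

Because each scenario is derived from a seed, a failure found in simulation can be replayed exactly, which is what makes this kind of randomized testing practical at global scale.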

It’s already being used to design better automotive cockpit controls and vehicle entertainment systems. How often is the same technology provider engaged in all parts of the creation of something as complex as an autonomous car? This integration into all parts of the creation and operation of future cars should lead to faster advances, higher quality products, and far fewer mistakes like the old Pontiac Aztek.  

NVIDIA Drive Orin is the brain of NVIDIA’s autonomous vehicle effort, with NVIDIA Jetson being the variant targeted at autonomous robots. NVIDIA Jetson Orin Nano, an accelerated platform for the future of autonomous robots using the Metropolis platform (given the product names, there must be a lot of sci-fi fans at NVIDIA), was also announced. The most interesting part is the application of this technology in medical instruments that are software-defined and powered by AI. This should significantly improve the quality of diagnostics from these systems, particularly those that use computer vision, and it will increasingly be applied to surgical robots, which will be more precise, make fewer mistakes, and be more reliable, particularly in remote areas where qualified specialists aren’t available. AMRs, or autonomous mobile robots, will revolutionize last-mile delivery and warehouse operations and provide help for the disabled.  

HPC

NVIDIA Triton is at the heart of much of the world’s HPC efforts. When it comes to finding patterns and relationships, Triton is, according to NVIDIA, the preferred product, using deep learning and the leading frameworks. The implementation that caught my attention is real-time image processing for live video. For those of us who stream, this means that we’ll always have perfect lighting and look our best even if we didn’t get much sleep and can’t afford a makeup artist (or they didn’t show up).  

NVIDIA’s cuQuantum, in use by leading quantum developers, including IBM, provides improved development resources as the market moves ever closer to quantum supremacy. There was a long list of related tools that NVIDIA launched to address the coming waves of HPC-level AI and quantum computing, again showcasing that NVIDIA remains on the cutting edge of development tools and platforms for these disruptive computing waves, including language models that create ever more capable conversational computers and personal AI assistants that will increasingly redefine our lives. Chatbots are becoming more aware, digital assistants more capable, and digital companions, particularly for those working in isolation or those of us who have outlived family and friends, are advancing at an impressive rate. 

One of the more interesting announcements was NVIDIA BioNeMo, targeted at hugely reducing the time needed to create future antiviral responses and more effective cures and treatments for the diseases that plague us.  

Recommender engines

These are the utilities that push customers toward the products and services they should be most interested in. When they work properly, they massively increase conversion rates and revenues because, rather than trying to convince someone to buy something they don’t want, they push people toward products they already like. NVIDIA’s Grace Hopper platform is increasingly favored for these systems, offering far greater potential accuracy and scalability. Interestingly, the Grace CPU, which is unique to NVIDIA, is at the heart of this solution and, I expect, will eventually break out of this implementation to do other interesting things in the future.  
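
NVIDIA hasn’t disclosed how Grace Hopper-based recommenders score items, but the core idea described above, nudging people toward items similar to ones they already rated highly, can be sketched as a toy item-based collaborative filter. All the user and product names below are invented for illustration:

```python
import math
from collections import defaultdict

def item_similarities(ratings):
    """Cosine similarity between items, from {user: {item: score}} ratings."""
    # Invert the ratings: each item gets a {user: score} vector.
    items = defaultdict(dict)
    for user, prefs in ratings.items():
        for item, score in prefs.items():
            items[item][user] = score

    def cosine(a, b):
        common = set(a) & set(b)
        if not common:
            return 0.0
        dot = sum(a[u] * b[u] for u in common)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb)

    sims = {}
    names = list(items)
    for i, x in enumerate(names):
        for y in names[i + 1:]:
            sims[(x, y)] = sims[(y, x)] = cosine(items[x], items[y])
    return sims

def recommend(ratings, user, top_n=3):
    """Score unseen items by similarity-weighted ratings of items the user liked."""
    sims = item_similarities(ratings)
    seen = ratings[user]
    scores = defaultdict(float)
    for item, score in seen.items():
        for (a, b), sim in sims.items():
            if a == item and b not in seen:
                scores[b] += sim * score
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

ratings = {
    "ann": {"gpu": 5, "monitor": 4},
    "bob": {"gpu": 5, "vr_headset": 5},
    "cat": {"monitor": 4, "vr_headset": 2},
}
print(recommend(ratings, "ann"))  # ['vr_headset']
```

The payoff the paragraph describes falls out of the math: ann is never pitched something unrelated, only the item most similar to what she already rated highly. Production systems replace the cosine step with learned embeddings, which is where GPU acceleration earns its keep.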

Interactive avatars

Toward the end of the keynote, we were treated to interactive avatars. These can be cartoonish or photorealistic and can increasingly interact with users as if they were human, shifting languages on demand while still appearing to speak directly to you, and responding to an ever-increasing breadth of questions with facial and hand expressions that will eventually be indistinguishable from those of real people. Deloitte was announced as the NVIDIA customer that is aggressively bringing this technology to market. 

Wrapping up:  

Fully simulated worlds, ever more capable AIs, recommenders that work, advances in HPC and quantum computing, massive improvements in autonomous vehicles and robots, and a host of tools that will redefine how computers interact with us. Woof. It was like watching the next generation of the computer industry. I doubt any of us have more than the beginning of an understanding of how significantly this will change how we work, how we live, and how we interact with both the real world and the metaverse that will increasingly be indistinguishable from it. At GTC this year I again saw the future and, to be honest, I’m a bit overwhelmed. The future is coming for us, but at NVIDIA it is already here, and the rest of us are struggling to catch up. If you want to see the future, watch the GTC keynote this year.



Source: TG Daily – Jensen Huang’s GTC Keynote

Putin Warns The West He Is Prepared To Use Nuclear Weapons


Vladimir Putin has said he is prepared to use nuclear weapons as he warned the west: “I’m not bluffing.”

Putin went on: “Those trying to blackmail us with nuclear weapons should know that the tables can turn on them.”

Putin said: “If there is a threat to the territorial integrity of our country, and in protecting our people, we will certainly use all means available to us – and I’m not bluffing.” Asked to deliver a message to the people of Ukraine, Foreign Office minister Gillian Keegan said: “We’re by your side, we will help as much as we possibly can.”

And to the Russian people, she said: “Look beyond your own media, it’s very clear what is going on in Ukraine.”

“He is continuing to completely misrepresent what is happening in Ukraine,” she said.

‘I’m Not Bluffing’: Vladimir Putin Warns The West He Is Willing To Use Nuclear Weapons

The Russian president issued the threat in a rambling TV address.

Continue reading on HuffPost UK

Ukraine war: Biden warns Putin not to use tactical nuclear weapons

The US president says Russia will become more of a pariah than ever if such weapons are used.

Continue reading on BBC News

Putin waves nuclear sword in confrontation with the West

WARSAW, Poland (AP) It has been a long time since the threat of using nuclear weapons has been brandished so openly by a world leader, but Vladimir Putin has just done it, warning in a speech that he has the weapons available if anyone dares to use military means to try to stop Russia’s takeover of Ukraine.

Continue reading on AP NEWS



Source: TG Daily – Putin Warns The West He Is Prepared To Use Nuclear Weapons

Effective Tips for Parallel Parenting


The concept of parallel parenting is widely used by individuals who have been divorced or separated but want to parent their children with limited contact and interaction. 

This parenting method is helpful in cases involving domestic violence and high-conflict relationships where there is no possibility of maintaining healthy conversation regarding the children. The Broder Orland Murray & DeMattie LLC law firm can help you pick the best option for parenting after divorce. 

Co-Parenting vs Parallel Parenting – What’s the difference?

The very concept of co-parenting involves cooperation and healthy communication by both sides. But in some instances, it gives toxic individuals opportunities to abuse their ex-spouses and past partners. To combat this problem, parallel parenting is used to prioritize the well-being of children by taking safety measures and reducing conversations with the other parent.

It distances ex-partners without harming the child, letting kids have their time with both parents, and it helps prevent abusive conduct by setting necessary boundaries between the partners. The primary purpose of parallel parenting is to support the emotional well-being of both partners while taking care of the child’s needs and protecting them from conflicting situations. 

Here are some helpful tips that partners practicing parallel parenting should follow to ensure the process goes smoothly. 

Draft a parenting plan

To avoid conflict, it is suggested to be well prepared and detail-oriented when creating a parenting plan. This reduces the chances of arguments and thereby minimizes contact and communication between the partners. A well-drafted parenting plan helps reduce stress in the family and ensures that everyone is safe and on the same side. The plan can specify visitation dates and their duration, how canceled visits are handled by both parents, and a preferred method of communication. It should also spell out how much time each parent will spend with the child, which parent is responsible for doctor visits and other routine tasks, the division of the child’s financial requirements and responsibilities, and the limitations on both parents.

Hire a mediator

If there is too much conflict and bitterness between you and your ex-partner, there is an excellent chance that you will not be able to handle parenting on your own. In such cases, it is ideal to hire a professional mediator as they help make decisions and reduce disagreements between ex-partners through effective mediation. Appointing a mediator can help prioritize the needs of a child without compromising your personal space and safety. 

Avoid communicating when it is not necessary

Make sure to hold minimum conversations with your ex-partner and document every interaction so that you have proof in case your partner decides to mistreat or abuse you. 

Written by Spencer Calvert



Source: TG Daily – Effective Tips for Parallel Parenting

US Senators Push Bill to Declare Russia a ‘State Sponsor of Terrorism’


Sens. Richard Blumenthal (D-Conn.) and Lindsey Graham (R-S.C.) unveiled new legislation on Wednesday that would circumvent the State Department and impose the dramatic declaration unilaterally.

Their latest move on the subject comes after President Joe Biden last week responded with a resounding “no” to a question about whether he intended to add Russia to the state terrorism list, a development that would please Ukraine.

Blumenthal and Graham indicated that they’re not yet sure how they plan to push their legislation through the chamber — whether as a standalone measure or a unanimous-consent request on the Senate floor.

Fox News

Graham, Blumenthal call for Russia to join list of state sponsors of terrorism, say crimes are ‘genocide’

Sens. Lindsey Graham and Richard Blumenthal introduced legislation that calls for Russia to be added to the list of state sponsors of terrorism, countering the White House’s official position.

Read More

Reuters

Biden administration discussing new Russia measures with Congress

The Biden administration is discussing with Congress new economic measures to penalize Russia for its invasion of Ukraine, the U.S. State Department said on Wednesday.

Read More

Zelensky calls for Russia to be designated state sponsor of terrorism

It is necessary to designate Russia as a state sponsor of terrorism and strengthen sanctions, in particular due to its airstrikes on Ukraine’s energy infrastructure. — Ukrinform.

Read More



Source: TG Daily – US Senators Pushes Bill to Declare Russia as ‘State Sponsor of Terrorism’

California’s New $518.5 Million in Funding for Mental Health and Substance Abuse Services


In June this year, California announced that it would provide an extra $518.5 million of funding to support people with mental health and substance abuse problems, including those who are street homeless. 

The announcement comes as a response to the nation’s deepening mental health crisis, which has seen rates of drug overdose and suicide soar. From October 2020 to September 2021, over 99,500 people died from a drug overdose, an increase of 45% from the previous year. National overdose deaths involving cocaine rose by 44%, and those involving other psychostimulants (such as meth) rose 93% in two years.

In California alone, over 10,000 people died from a drug overdose in the past year, a 70% increase from the previous annual rate. Fentanyl accounts for 53% of those deaths, totaling 3,974 deaths in 2020.

Experts argue that the pandemic has been a driving factor in these deaths. Job loss, school closings, clinic closures, reduced clinic hours, and other changes have left people isolated and without access to support. Financial and mental stress, along with a loss of social connections, have caused more people to turn to drug abuse. It’s also contributed to a proliferation of other mental health issues.

CARE Court Program

In response to the overdose and mental health crisis, California’s governor launched the CARE Court program. The CARE Court program aims to connect a person in crisis with a court-ordered Care Plan for up to 12 months, with a possible extension to 24 months. It offers individuals a community-based set of services that reflects their cultural and linguistic needs, including short-term medications, recovery support, and access to housing. Proponents emphasize the importance of stable housing in a recovery plan – offering stable and long-term treatment is almost impossible when someone is living on the streets, in a tent, or in a car.

As part of the CARE Court program, the extra half a billion dollars of funding will provide treatment beds for over 1,000 people at a time, as well as behavioral and other health services. It will help to expand mental health housing across California, reaching those who are most in need.

CARE Court Funding

CARE Court funding isn’t intended for everybody who is homeless or living with mental illness. Instead, it’s directed towards people with mental disorders that meet specific criteria, aiming to intervene before they are arrested or require inpatient treatment. It aims to put people who are suffering from treatable health conditions on the path to recovery – and a better future.

The grants will be distributed through the Department of Health Care Services’ Behavioral Health Continuum Infrastructure Program, with each county receiving a designated award. Grants were awarded as follows:

  • Alameda County – $18,405,122
  • El Dorado County – $2,852,182
  • Humboldt County – $4,170,560
  • Kern County – $3,138,065
  • Los Angeles County – $155,172,811
  • Madera County – $2,035,512
  • Mendocino County – $7,711,800
  • Monterey County – $3,558,670
  • Nevada County – $4,458,799
  • Orange County – $10,000,000
  • Placer County – $6,519,015
  • Riverside County – $103,181,728
  • Sacramento County – $30,553,889
  • San Diego County – $30,874,411
  • San Francisco County – $6,750,000
  • Santa Barbara County – $2,914,224
  • Santa Clara County – $54,074,660
  • Solano County – $14,332,411
  • Sonoma County – $9,751,915
  • Stanislaus County – $33,369,900
  • Yolo County – $12,500,000

During the latest grant announcement, the Governor met with families who were affected by mental illness and homelessness. Many of their loved ones could be helped by the CARE Court program. The Governor listened to their experiences and set out the actions that California is taking to address the crisis.

He explained that the crisis on California streets is at a breaking point, with a growing number of residents struggling with substance abuse and mental health disorders – and ending up on the street. He highlighted the need for a clear change in strategy, describing how the new grants are a crucial step in reworking the state’s approach to homelessness and mental illness. 

Five Stages of the Care Plan

According to the CARE Court website, the novel approach will involve five key stages.

Referral

Individuals may enter the scheme through a referral to the court by a family member, first responder, behavioral health provider, or any other approved party. Any individual with an untreated schizophrenia spectrum or other psychotic disorder and limited decision-making capacity is eligible.

Clinical Evaluation

In the second stage, the civil court requests a clinical evaluation of the individual’s case and appoints a public defender and official supporter. After reviewing and approving the application, the court asks for the development of a care plan.

Care Plan

The county’s behavioral health team, participants, and supporters collectively develop the Care Plan, encompassing recovery treatment, medication, and a housing scheme. The Court reviews the plan and adopts it for up to 12 months.

Support

The county behavioral health team, the supporter, and the participant continually evaluate and improve the Care Plan as required. They may refer to a Mental Health Advance Directive if there are any future challenges. The program may be extended by another 12 months.

Success

By the end of a successful program, the participant will graduate from the plan with the mental wellness and social stability they need to continue to build their future. The participant can access continued treatment, support services, and community housing to promote long-term recovery.

Aims of the Care Plan

In full, the care plan aims to ensure that support and services are tailored to the unique needs of each individual. It works to coordinate social and medical resources – offering clinical treatment and housing simultaneously – to provide holistic and comprehensive support. 

Written by Rida Sheppard



Source: TG Daily – California’s New $518.5 Million in Funding for Mental Health and Substance Abuse Services

NVIDIA GTC 2022: Where Imagination Meets Technology

My favorite event to attend every year is NVIDIA’s GTC, or GPU Technology Conference. I would humbly suggest the conference be renamed since the show has evolved to be much more about the near-term future of technology across robotics, automobiles, metaverse, and AI. Next week the event will be held virtually again. It is well worth your time if you want to get a feel for what will be coming to market over the next two to four years because this event is a showcase of the closest thing we have in the market to real magic. 

By real magic, I mean our ability to make what we can imagine real, either in the metaverse or in the material world. 

Let us talk about some of what you will see at the event, and some potential surprises we will see at this or future GTCs. Oh, and before we start, if you want to sign up for the event, you can do that here.

Robotics

This is fascinating to me because, years ago at Dell Technology World, Dell had a futurist talk about robotics being the future of what was then, and is now, the computer business, at least in terms of its market potential. Since then, Dell and every PC OEM, with the recent exception of Lenovo, have seemed to do everything possible to avoid that market.

NVIDIA is the exception, and its technology forms the foundation for current and future robotic development, from simulation and training in its Omniverse platform to the deployment of intelligent machines in a growing range of highly automated factories, farms, warehouses, and even homes. 

If you want to see what comes next in the technology that you will eventually use at work or in the home, these GTC robotics sessions are a must-see. Session titles include “A perspective on the future of robotics and its use cases in logistics and smart transportation,” “The next wave of edge AI and robotics,” and “Leveraging simulation tools to develop AI-based robots.”   

Intelligent Video Analytics 

Creating AIs that can see — and understand what they see — is not only critical to robotics but has broad applications to autonomous vehicles, security, safety, smart buildings, and smart cities. If you want to get a sense of how this technology can make our working and living spaces physically safer, more responsive to our needs, and less frustrating, there are a few sessions in this segment that look particularly interesting. 

Among them are a panel on “Accessing the value of infrastructure smart spaces,” which addresses smart buildings and cities, and “Tracking objects across multiple cameras made easy with Metropolis microservices,” which potentially addresses physical safety and convenience. 

Automotive

Automobiles are going through massive changes as they transform from ICE (internal combustion engines) to electric propulsion and from being driven by people to being driven by AIs with the promise of both higher safety and far more convenience. These changes may impact whether we even need to own cars, which are currently an overly expensive and underutilized personal luxury. You will find me watching a number of these sessions right along with you because cars, for me, are both a critical tool and one of my hobbies. 

Sessions to look for are “Building future-ready intelligence cars,” “Developing software defined vehicles for the new era of transportation,” “Elevating the passenger experience with intelligent in-vehicle infotainment,” and, for those new to this segment, “Introducing autonomous vehicles.” 

There is a lot of misinformation in the market about this technology. If you want to know what is going on and anticipate what you should look for in your next car, that last session alone should be incredibly helpful. Oh, and my old friend Dean Takahashi from Venture Beat is hosting a panel on the opportunity of the industrial metaverse with some heavy hitters from Magic Leap, Siemens, and Mercedes-Benz, which should be fascinating. 

Wrapping up: A hint of what is to come

NVIDIA’s GTC represents what I think is the best place to see what is coming from advanced technology, particularly regarding the application of next-generation tools like the metaverse and advanced intelligent robotics. But I was watching Seth MacFarlane’s The Orville the other day (a fascinating series now on Hulu that, for me, recalls Star Trek at its best) and it struck me that one of the potential benefits of advanced metaverse tools, particularly NVIDIA Omniverse, is that they could make shows like this both far better in terms of special effects and far less expensive to produce, helping ensure they don’t end prematurely like Babylon 5, Brisco County, Jr., and especially Firefly did.  

So, for me, this show goes beyond the practical applications of robotics, advanced AI vision technology, and autonomous vehicles into entertainment and visions of a future of even more compelling stories of metaverses that we would like to live in. If you can find the time, GTC is a must-attend event that takes me away from the dire news that increasingly otherwise fills my day and gives me hope for a better tomorrow. 



Source: TG Daily – NVIDIA GTC 2022: Where Imagination Meets Technology

How Intel’s Israel Development Center Is Driving the Future Renaissance of the Company


Intel is in the process of a turnaround led by Pat Gelsinger, who was mentored by the legendary Andy Grove. Gelsinger is arguably one of the strongest CEOs in the market, but Intel’s problems are deep and have accrued over decades. The company has an operational history of being intolerant of mistakes and cultivating a cut-throat culture that has been particularly hard on female employees. To fix the company, Gelsinger is increasingly relying on Intel’s unique Israel Development Center (IDC), which is more tolerant of mistakes and more proactive in elevating women to engineering executive positions, both as a framework for the new Intel and as a way to help the company execute during its transition into a more successful company and a far better place to work. 

I am spending the week learning about this unique resource, and I am incredibly impressed with the quality of the work done here. One of my favorite Intel executives (next to Gelsinger, whom I’ve known for decades) was Mooly Eden, who came out of this division.  

Let’s talk about how Intel’s IDC has become foundational to the creation of the new and improved Intel. 

Intel’s IDC

Intel’s IDC has been a major force behind most of the big developments that have come out of the company. Not only was it where Intel built its first international fab; it was also where, in 1993, video on a PC came to be. That development was followed by Wi-Fi for notebooks in 2003, dual-core processors in 2005, integrated graphics in 2011, and the world speed record for DDR4 in 2020, and most advances in core technology have been developed there over the years. 

This is a fascinating organization, both because it is so different from the rest of Intel and the technology industry in general, and because the culture of Israel aggressively favors collaboration and inclusion compared to other parts of the world, due to the country’s unique beginnings and nature.

Israel is a small country that exists under substantial risk, since it is surrounded by countries that are hostile to it. This fosters a sense of camaraderie and drives inclusion because everyone, regardless of gender, must work together to ensure the country’s future. Once the current war settles, I expect Ukraine will develop a similar culture, but for now, Israel is unique in what is clearly a forcing function based on mutual survival. 

Diversity and inclusion for women

It is also where women in engineering have been incredibly successful. This is important because women in engineering are not only relatively rare but have faced massive issues of abuse in the industry. This has not been an issue at Intel alone. It’s a problem endemic to the overall technology industry that has proven almost impossible to correct. At tech companies, including Intel, you see old white guys on stage. Women tend to work in comms or marketing, not engineering. At Intel’s IDC, half the presenters were not only women but corporate or senior VPs. 

For example, the division highlighted Karin Eibschitz-Segal, Corporate VP and GM of Intel Validation and Engineering; Shlomit Weiss, Senior VP and GM of Design and Engineering; and Mandy Mock, VP and GM of the Desktop, Workstation and Channel Group. At IDC, women do not have to break the glass ceiling. It has already been destroyed, providing a terrific example of what can be done at a modern technology company.

Willingness to tolerate mistakes

A few years back, I met with the CEO of Ford, who was proud of the change he had directed at a company whose intolerance of mistakes had created a culture where no one was willing to take risks. However, he joked that, while he was driving a more tolerant policy, anyone who screwed up Ford’s F-150 pickup would still get shot, leading me to believe he did not understand his own policy. He also did not seem to get my warning about not understanding Tesla, which led me to accurately predict he would not last much longer, and he did not. 

No one is perfect. If you shoot employees who take reasonable chances that fail, you will end up with a culture that is afraid to make decisions. I ran into this when I was at IBM researching why it nearly failed. I discovered that, at that time, executives were afraid to fix known problems because if they got it wrong, they would lose their pensions. That fear was foundational to IBM’s near failure.

Intel’s culture has historically been very hostile to executives who screw up, even when it is not their fault. Pat Gelsinger knows this personally, as he was put in charge of Larrabee, an earlier microarchitecture effort to take on NVIDIA and ATI in graphics. The project had failed before Gelsinger took it over, but the failure had been covered up, and Gelsinger was forced out not because he screwed up, but because the failure was discovered on his watch. I have often thought that this move was engineered by a powerful rival who wanted to make sure Gelsinger never made it to CEO. 

Intel’s IDC recognizes that you cannot make an omelet without breaking eggs and that failures are the price of progress. If you do not tolerate failures, you will not make progress. Intel’s historic intolerance for failure and its practice of shooting executives who fail led to the departure of some of the firm’s strongest executives, including its current CEO. IDC’s tolerance for failure comes directly out of the culture of Israel, a country that would have crumbled years ago had it not been tolerant of failure. IDC’s people are such strong believers in this approach that their practices and beliefs are spreading through Intel, turning it into a more successful company and a much better place to work. 

Wrapping up

Innovation, tolerance for failure, and diversity are all more evident in Intel’s IDC division than in Intel overall. Thus, Pat Gelsinger’s strategy of using IDC as a template for how to fix Intel is an excellent one for turning Intel into a more successful and powerful company than it currently is. IDC not only represents the new core of Intel’s future culture; it is also an example of how other companies might address their own diversity problems and failure-averse policies that make them less competitive. By drawing on Israel’s uniquely collaborative, innovative, and diverse culture, there is a potential path to make every tech company more agile, more inclusive, and a better place to work than it currently is. 

I believe that Israel may be the secret sauce for fixing not only Intel but much of the technology market, by helping firms change their cultures from competitive employee advancement that favors the biggest male jerk at the table to one that favors tolerance for failure, collaborative and inclusive advancement, and a more balanced and equitable team. IDC is a fascinating division that I hope will be a template for the future of every tech company, particularly those that currently mistreat women and scapegoat risk-takers in the workforce. IDC helps ensure that Intel is on track for its multi-billion-dollar turnaround.



Source: TG Daily – How Intel’s Israel Development Center Is Driving the Future Renaissance of the Company