Platforms = End Game for IT

Is that headline over the top? Perhaps 🙂 On the other hand, it’s obvious when you think about it, and it’s also nothing new. It’s the short answer to the question, “Chad, why are you incredibly pumped about being the new leader of VCE?”

Let’s start by really understanding the word platform.

In the military domain: “…solid ground on which artillery pieces are mounted.”

In the political domain: “…a public statement of the principles, objectives, and policy of a political party.”

In the digital technology domain: “…a major piece of software, as an operating system, an operating environment, or a database, under which various smaller application programs can be designed to run.”

The common pattern? A platform is a foundation: something you consume and build on rather than something you decompose. In fact, the core value is that it is not decomposable. It must be used whole. When someone tries to decompose a platform, it loses its value, its purpose, its animus.

In the past, the IT platform domains were primarily server, network, storage, database, client, application, and security. Virtualization started the trend of mashing up server/network/storage by making the technical lines dividing those domains very blurry. Public IaaS clouds finished the job, making the dividing lines invisible. SaaS and PaaS models then moved the bar of “platform” further still.

In IT, the idea of a platform is nothing new. It’s always existed. So, it’s by definition the end game. What is changing, and disrupting massive ecosystems in the process, is where platforms begin and end.

Three reasons why I’m so pumped

Reason One

Storage, compute, and networking domains are commoditizing. Can anyone disagree? It’s not that there won’t be great innovations in sub-component stacks. There will be tons of new things like VMware NSX and Horizon, like EMC DSSD, XtremIO, and Neutrino. There will be things like VSAN and ScaleIO from VMware and EMC together, and things like Cisco ACI and UCS Director.

In open source land, the point-solution innovation is overwhelmingly fast and furious. There will be innovations in the Apache Hadoop ecosystem and the OpenStack ecosystem; there will be Mesos releases, Docker updates, the VMware Photon Platform, EMC RackHD, REX-Ray, and so much more. It can feel never-ending. Of course there will be tons of innovations by our competitors too! And yes, all those ingredients are awesome. 🙂

But the game is shifting toward “buy commodity vs. build where you differentiate.” I firmly believe that customers are thinking more strategically about where they buy vs. build, redefining where the new commodity line is drawn, which leads to…

Reason Two

The lowest common denominator of that new commodity line is convergence: consumption of infrastructure as virtualized pools of compute/network/storage with an integrated consumption and management model. This comes in three system-level architectures:

Blocks – traditional virtualized system architectures, packaged for turnkey consumption.

Racks – hyper-converged virtualized industry-standard servers with software-defined storage and networking stacks; designed as a system to scale big, with much simpler operational models than traditional stacks, scaling in simpler, linear ways.

Appliances – hyper-converged virtualized industry-standard servers with software-defined storage and networking stacks; designed as a system to start small, with much simpler operational models than traditional stacks, scaling in simpler, linear ways.

In 2016, the people still building and trying to optimize their own stacks are wasting their time (with small, very workload-centric exceptions). It’s not that they aren’t smart or capable. It’s that if you ARE smart and capable, you should focus on something that provides more value, which leads to…

Reason Three

The higher-order version of choosing what you build vs. what you buy, and of where you draw the commodity line, is the next layer up in the stack: turnkey IaaS/PaaS (on and off premises, and in every form of capex/opex economic model) and SaaS models.

Turnkey IaaS and PaaS are not actually turnkey yet. There is a window to make them that way, and EMC, VMware, and Pivotal are in a better position than anyone to do it. It requires a pivot in some strategic thinking and posture: becoming more opinionated (while still always offering choice; that’s a brand promise). You can see the industry running to this point (see the EMC Federation Enterprise Hybrid Cloud, Cisco Metapod, Mantl, IBM’s Bluebox efforts, and Azure on-premises work). None of us have nailed it yet, but we will.

If you agree with point #1 (the commoditization of components)…

Then the center of the infrastructure universe is at the same place as the center of the universe for making storage/networking/compute convergence the new “commodity” domain. That’s at VCE within EMC. This strikes me as the center of that new entity.

If you agree with point #2 (the new base commodity layer is converged/hyper-converged)…

Then you want to be at the place that is the clear leader in converged infrastructure. That’s clearly VCE. VCE exits FY15 on an even higher run rate than the $2 billion+ we previously disclosed, demonstrating the resonance of the VCE value proposition and the tremendous growth potential of CI.

For perspective, one of the other leaders in this space made their current run rate public in an S-1 filing. It highlights a 4x delta. It also makes clear why everyone would target VCE as a leader in this category, and that’s OK (competition is good for everyone)!

Today, VCE is synonymous with Vblock, which is in the “Block” category of CI. Blocks are an important category and will continue to grow as the best way to support workloads at certain scales and in certain industries and use cases. Vblock is an unquestionable leader in this part of the market, and in 2016 we will DOUBLE DOWN on Vblock with our partner Cisco.

If you look at the incredible success VCE had in 2015, a huge portion of that $2B bookings run rate is directly attributable to the success of Vblock. That winning formula is one we have developed and nurtured in partnership with Cisco over six years. It is a strong partnership, and one we will keep doubling down on and investing to grow. Vscale, the Vblock (in particular the VB500), and expansion, upgrades, and refreshes into the enormous Vblock installed base are all areas that benefit EMC, Cisco, and most importantly the customer.

While Racks and Appliances represent critical new growth engines for the Converged Infrastructure business, let there be no doubt: our partnership with Cisco is central to our Converged Infrastructure plan.

BUT hyper-converged models are an area of massive growth. VCE will not rest on the laurels of success in the Block category; rather, we will disrupt ourselves to become a leader in hyper-converged Rack-scale and Appliance forms of CI. These differ from the well-established Block CI model in a ton of ways (operationally, economically, technologically, and as a business model).

Can we do it? Only results speak. Self-disruption needs to be a core competency, and VCE has it. We absolutely can be the leader in all three forms of CI (Blocks, Racks, and Appliances). Customers want partners who are more than a one-trick pony and offer a CI portfolio. We plan on doing it and will do it. Period.

If you agree with point #3 (turnkey IaaS/PaaS/Data Fabrics are the “emerging commodity layer”)…

Then you would want to be in the place where “turnkey buy vs. build” moves to the next level, where you could take the value of an engineered system and make it include engineered solutions.

Converged and hyper-converged infrastructure are simply a means to an end for customers. They dream of a turnkey IaaS/PaaS/Data stack. The team that builds solutions like the Federation Enterprise Hybrid Cloud stack, the Federation Business Data Lake, and what we will soon reveal as our solution for new cloud-native app development is part of the same team building converged infrastructure.

We can move commoditization further up the stack. We can aggregate, industrialize, and curate the technologies of EMC, VMware, and Pivotal. 2015 had a lot of great solutions work, but it still hasn’t been turnkey enough, curated enough. We can do that. It will take a little longer, but that’s the big opportunity. I can even dream of a day when all IT is consumed as a platform.

VCE is now the Converged Platforms Division of EMC

Our mission is simple:

Shift customers upward toward “buy vs. build.” Focus on delivering the business outcome: the cake, not the ingredients.

Broaden the domain of Converged Infrastructure with a portfolio of Blocks, Racks, and Appliances.

Raise the bar on defining the new commodity into the IaaS/PaaS/Data Fabric domain. Bring the simplicity of public cloud IaaS/PaaS models to cases where the right answer is on-premises.

Help power Virtustream and the other parts of the Federation that deliver those elements as managed services and public cloud offers.

Now, in addition to leading VCE, I continue to lead the EMC Global Systems Engineering community. All of my brothers and sisters in the EMC SE team will now be able to tap into the resources of VCE, and vice versa. We can get the same 1+1=3 with the sales teams, the engineering teams, and the customer service and professional services teams. That is powerful!

VCE as the Converged Platforms Division is at the center of EMC’s business strategy.
Converged Platforms – both Infrastructure (Blocks, Racks, Appliances) and Solutions (IaaS, PaaS, Data Fabrics) – are at the center of what customers want, and they represent a path toward a simplified and accelerated IT world.

It’s an incredible team and an incredible opportunity, and 2016 will be an AWESOME year! I’m PUMPED!

Forward-Looking Statement Legend

This release contains “forward-looking statements” as defined under the Federal Securities Laws. Actual results could differ materially from those projected in the forward-looking statements as a result of certain risk factors, including but not limited to: (i) risks associated with the proposed acquisition of EMC by Denali Holdings, Inc., the parent company of Dell, Inc., including, among others, assumptions related to the ability to close the acquisition, the expected closing date and its anticipated costs and benefits; (ii) adverse changes in general economic or market conditions; (iii) delays or reductions in information technology spending; (iv) the relative and varying rates of product price and component cost declines and the volume and mixture of product and services revenues; (v) competitive factors, including but not limited to pricing pressures and new product introductions; (vi) component and product quality and availability; (vii) fluctuations in VMware, Inc.’s operating results and risks associated with trading of VMware stock; (viii) the transition to new products, the uncertainty of customer acceptance of new product offerings and rapid technological and market change; (ix) risks associated with managing the growth of our business, including risks associated with acquisitions and investments and the challenges and costs of integration, restructuring and achieving anticipated synergies; (x) the ability to attract and retain highly qualified employees; (xi) insufficient, excess or obsolete inventory; (xii) fluctuating currency exchange rates; (xiii) threats and other disruptions to our secure data centers or networks; (xiv) our ability to protect our proprietary technology; (xv) war or acts of terrorism; and (xvi) other one-time events and other important factors disclosed previously and from time to time in EMC’s filings with the U.S. Securities and Exchange Commission. EMC disclaims any obligation to update any such forward-looking statements after the date of this release.

Episode #50: Cloud-Enabled Microsoft Applications with Paul Galjan

Moving Microsoft applications to private, public, or hybrid cloud configurations requires experienced architects, best practices, and tools to ensure success. I sat down with Paul Galjan (@PaulGaljan), functional lead for EMC Microsoft Technologies. We talked about clouds, EMC Unity, Microsoft SQL Server 2016, open source, Exchange, SharePoint, Azure, Lync… and that’s in the first few minutes.

Don’t miss the “EMC The Source” app in the App Store. Be sure to subscribe to The Source Podcast on iTunes, Stitcher Radio, or Google Play, and visit the official blog at thesourceblog.emc.com

The Source Podcast: Episode #50: Cloud-Enabled Microsoft Applications with Paul Galjan

EMC: The Source Podcast is hosted by Sam Marraccini (@SamMarraccini)

#69: Dell EMC World Austin Data Protection Update

Dell EMC World Austin provided the perfect opportunity to announce the latest enhancements to the Dell EMC market-leading portfolio of data protection software. Data Domain Virtual Edition 3.0, along with enhanced cloud data protection and manageability and ProSupport One for Data Center, highlighted the latest announcements.

Alex Almeida (@alxjalmeida), Manager, Technical Marketing for Data Protection, reviews the high-level announcements this week on Dell EMC The Source Podcast. For the latest Dell EMC Data Protection announcements, be sure to follow @DellEMCProtect

Didn’t get a chance to visit Austin? You can check out all the keynotes and select breakout sessions in the “Live” library here.

Don’t forget to mark your calendars for Dell EMC World Las Vegas, May 8th–11th, 2017, at The Las Vegas Venetian.

The Source Podcast: Episode #69: Dell EMC World Austin Data Protection Update

Don’t miss the “Dell EMC The Source” app in the App Store. Be sure to subscribe to Dell EMC The Source Podcast on iTunes, Stitcher Radio, or Google Play, and visit the official blog at thesourceblog.emc.com

EMC: The Source Podcast is hosted by Sam Marraccini (@SamMarraccini)

Thinking Outside the Box: Extending Converged Infrastructure Across Networks

As I get ready to head to Cisco Live in Berlin, I’ve been giving a lot of thought to IT systems. All IT systems and their components have limits, boundaries of all sorts: technology upgrade options, maximum scalability, ease of reconfiguration, and openness to integrate with multi-vendor systems, to name a few.

Converged infrastructure systems (i.e., compute, storage, network, and virtualized components engineered and manufactured together as one product) have limits too, albeit highly scaled ones.

For example, Dell EMC Vblock or VxBlock Systems can scale from 2 to 256 compute blades, 4 to 11,264 cores, and 2 to 10 raw petabytes. Limits, yes, but that’s more than enough to handle many enterprises’ entire mix of data center workloads. And these converged systems have the flexibility to mix and match technologies (e.g., different types of storage and compute devices) through side-car cabinets called Converged Technology Extensions.

But what’s next? What if you want to scale more? What if you want to share resources across converged systems? What if you want to share converged resources with legacy, non-converged systems – or vice versa? How do you go beyond the boundaries of converged systems, even if they can scale to 10 petabytes and thousands of CPU cores?

Those are challenges that our customers would like to see Dell EMC solve. And to solve them, the company literally thought “outside the box” and created the Vscale Architecture to converge resources across networks.

Since March 2015, when VCE (now Dell EMC’s Converged Platforms & Solutions Division) first announced the Vscale Architecture as a strategy, the company has been quietly deploying it across a range of industry sectors: government, transportation, finance, retail, manufacturing, healthcare research, and more.

Here’s an illustration of one such deployment scenario:

This architecture extends the benefits of convergence — operational simplicity, lower risk, and faster time to market for new services — to the entire data center and across distributed data centers.

Over the next few months, I’ll be blogging and writing new literature about the architecture, its core components (such as pre-engineered Cisco spine-and-leaf networks with Cisco automation and software-defined networking), and its real-world deployments.

In the meantime, see the data sheet posted here for a first, high-level description of the architecture and its various components.

Dell Joins Cross-Industry Coalition to Advance Women of Color in Tech

Today, Pivotal Ventures, an investment and incubation company created by Melinda Gates, and McKinsey & Company released new research on closing the gender gap in tech. The report analyzes philanthropic contributions from 32 leading tech companies. It reveals that only 5% of companies’ philanthropy goes toward gender diversity in tech, and even less (0.1%) goes toward women of color.

The low investment has gotten the tech industry’s attention, and we’re responding by joining forces in a new initiative launched by Pivotal Ventures called the Reboot Representation Tech Coalition. The collaboration seeks to align philanthropic donations and increase funding, with the ultimate goal of doubling the number of underrepresented women of color graduating with computing degrees by 2025.

Dell Technologies joins this effort as a founding member alongside Intel, Microsoft, Adobe, and Oath. Collectively, 12 tech companies have committed more than $12 million to this goal, which represents a 30x increase in funding.

This type of cross-industry collaboration is critical for our business success. First, the technology sector is outgrowing our potential talent pool. Consider the numbers: 1.1 million computing-related job openings are expected in the U.S. by 2024, yet only 45% of those jobs could be filled by U.S. students graduating with a computing bachelor’s degree by then. Working together, we can pool resources to invest in the organizations we all agree have the greatest specialty in the cultural, social, and economic challenges at play.

Second, diverse perspectives drive innovation. If you believe, as we do, in the critical role that innovative technology plays in transforming our world, we all benefit from broad participation in the technology workforce.

Finally, every company will eventually become a tech company. Creating a tech talent pool doesn’t just help us; it helps our customers too. These are the jobs of the future. Our Chairman and CEO, Michael Dell, describes our responsibility perfectly:

“Technology continues to transform our world in unprecedented ways. Now more than ever, it’s imperative that we have diverse perspectives helping to shape our collective future, and that means as an industry we have a responsibility to address skills gaps and break down barriers to participation.”

Philanthropic dollars are one of many ways Dell Technologies is putting its weight behind addressing the tech skills gap for women of color and other underrepresented groups. Last month, we introduced the Dell ReStart program, which supports candidates eager to rejoin the workforce after stepping away from their careers. We recently joined Northeastern University’s ALIGN program, which provides a direct path for women and under-represented minorities to a Master of Science in Data Science, Computer Science, or Cyber Security. And today we pledged to join the HBCU Partnership Challenge, a commitment to create strategic partnerships with Historically Black Colleges and Universities (HBCUs) to develop top under-represented minority talent.

Our global teams are passionate about engaging in support of our future workforce. This month’s employee volunteer focus is around inspiring our future workforce through 1:1 mentorship. Learn more about the importance of mentorship in the video below, featuring 15-year-old STEM advocate, author, and student Quinn Langford.

This effort is a journey. We know we have more to do, but we believe that through commitment and collaboration, we may have a real shot at changing the numbers and bringing new and innovative perspectives to the tech table.

Know Your Role: Getting It Right in the Cloud

It’s hard to remember now, but attitudes toward the public cloud have changed massively in a short period of time. When public cloud services first became available around 2006, most organizations were understandably skeptical. The idea of storing data in a remote location made them uncomfortable. They worried about reliability, security, and the loss of direct control over their applications and data.

More than a decade later, customers have come to embrace the public cloud. It has moved from a bleeding-edge technology to a fundamental component of nearly every large organization’s IT strategy. These days, organizations are, if anything, too ready to adopt the cloud without careful planning. They don’t always realize that, when it comes to public cloud deployments, the devil is in the details. It is easy to underestimate the amount of time and effort still required to optimize and manage the environment.

Start by understanding your responsibilities

When it comes to the cloud, roles and responsibilities are often not clear to new users. Many customers have a fundamental misunderstanding of who owns what in the public cloud. Either they haven’t taken the time to understand their responsibilities in detail, or they assume that their cloud provider will handle them. This is incredibly common and often leads to serious complications. This gap in understanding and knowledge is the hidden reason why many cloud deployments fail.

Every public cloud provider offers a “shared responsibility model,” a breakdown of what customers must cover and what is provided by the provider’s own services. In my conversations with firms that are already in the public cloud, I’ve often found that many are unaware of these shared responsibility models.
Even more fail to take the time to understand them fully, along with their implications.

These models vary a bit from provider to provider, but usually look something like the graphic below.

Sample Shared Responsibility Model

While the major public cloud providers offer advanced and proven infrastructure, the customer carries the burden of configuring and incorporating those services to fit their own environment. Often, cloud services require customers to take on significant management activities. Sometimes this flies in the face of what organizations would expect when buying “as a Service.”

This can get complicated fast, particularly for less technical customers or those lacking a strong overall plan. After all, very few companies go to the cloud with a clear, centralized strategy owned by a single entity. Most organizations have many points of adoption, with individual business units or even small teams adopting cloud-based infrastructure and services, often in very different ways and for very different purposes.

Adding greatly to the confusion is the reality that 93% of customers [1] are deployed to multiple clouds. This means they must understand, and act on, multiple shared responsibility models, as well as support divergent operational requirements and control layers.

When you consider these factors, it makes sense that many customers have big gaps in their execution and management approaches, caused directly by a failure to understand their responsibilities. Let’s examine the most important and common areas where organizations get into trouble.

Infrastructure

When you deploy your applications on any IaaS offering, you are paying for bare-bones compute, storage, and network access. The way these resources are configured is your responsibility.
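To make that split concrete, here is a minimal sketch of a responsibility matrix across service models. The layer names and owner assignments are illustrative simplifications for this example, not any provider’s official model:

```python
# Illustrative shared-responsibility matrix. Layer names and owner
# assignments are simplified examples, not any provider's official model.
RESPONSIBILITY = {
    #  layer              IaaS        PaaS        SaaS
    "physical":          ("provider", "provider", "provider"),
    "hypervisor":        ("provider", "provider", "provider"),
    "os_patching":       ("customer", "provider", "provider"),
    "runtime":           ("customer", "provider", "provider"),
    "application":       ("customer", "customer", "provider"),
    "identity_access":   ("customer", "customer", "customer"),
    "data":              ("customer", "customer", "customer"),
}

MODELS = ("IaaS", "PaaS", "SaaS")

def customer_duties(model):
    """Return the layers the customer still owns under a service model."""
    col = MODELS.index(model)
    return [layer for layer, owners in RESPONSIBILITY.items()
            if owners[col] == "customer"]

for m in MODELS:
    print(f"{m}: customer owns {', '.join(customer_duties(m))}")
```

Notice that the identity and data rows stay with the customer in every column; no service model takes those off your plate.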
So you carry the burden of architecting a network topology that accounts for routine security work, such as performing operating system updates and setting up your firewall.

The key problem here is misconfiguration. If you don’t get your firewall set up correctly, your data may be wide open to the internet. If you don’t structure your cloud services properly, you may introduce business risk from potential downtime or slowdowns. Many customers make simple and avoidable setup errors, such as not running across multiple availability zones, or failing to tap into the structure of the cloud to provide resiliency. Once deployed, they may not monitor the performance of their workloads in the cloud, assuming the cloud provider will signal them if any issues arise.

The bottom line: while cloud extends your infrastructure, it also extends the breadth and range of your configuration and management responsibilities. Getting those right can make a huge difference in lowering your business risk and increasing your efficiency.

Security and Encryption

When a single incident can create permanent harm to your customers or your reputation, you cannot afford to get your security wrong. The potential costs of a breach or failure include direct expenses from downtime and long-term penalties from regulatory punishments and diminished customer trust. Yet enterprises often have glaring blind spots when it comes to their security profile across clouds.

One common issue is encryption. When configured correctly, it should apply to data across all its states: at rest, in use, and in transit. Most customers know to encrypt their data when it is static, on the client side. But surprisingly often, they will allow open data to move across their network and hit their servers.
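A surprising amount of that in-transit exposure can be caught with very basic auditing. The sketch below flags service endpoints that still move data over plain HTTP; the endpoint URLs are hypothetical, and a real audit would pull the list from your own service inventory:

```python
from urllib.parse import urlparse

def unencrypted_endpoints(endpoints):
    """Return endpoints whose traffic is not protected by TLS (no https)."""
    return [url for url in endpoints if urlparse(url).scheme != "https"]

# Hypothetical inventory; substitute your own endpoint list.
inventory = [
    "https://api.example.internal/orders",   # TLS in transit: fine
    "http://legacy.example.internal/feed",   # cleartext: flag it
]
for url in unencrypted_endpoints(inventory):
    print(f"WARNING: data in transit is unencrypted at {url}")
```

A check like this is deliberately crude; it catches the obvious cleartext paths, not weak cipher suites or expired certificates, which need a real TLS scanner.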
This usually happens because they expected the cloud provider to secure the data in transit, misreading their responsibilities and introducing massive business risk.

Another security challenge that is often overlooked is threat detection and response. Many organizations think their cloud provider “owns” this, but it actually falls to the customer. This means taking care of your own network monitoring, tracking threats, and analyzing logs. It is up to you to scan proactively for vulnerabilities, or for precursor activities like port scanning or brute-force attacks, to stop incidents before they happen.

Application Services

To make your business and people effective, you first need to provide your users with applications that work. Setting up your application services correctly is essential to enabling your stakeholders to work reliably and at scale. In the cloud, this burden falls mostly on customers.

You must determine your own identity and access management profile. It is up to you to find the delicate balance between being too open, which introduces risk, and too restrictive, which saps productivity and efficiency.

You are also responsible for designing your platform to withstand intense, challenging service levels. Creating a resilient platform that can scale is not always easy. This is a pervasive and expensive problem; according to a recent analysis by Dell EMC and VansonBourne, 41% of enterprises have suffered a downtime event in the last 12 months.

Data

One of the most important shifts that companies can make is to go from a cloud-first to a data-first mentality. When your cloud dictates what you can do with your data, you are limiting your data capital. It is therefore critical to understand what cloud providers do and do not provide in terms of data management and protection.

The inability to move data quickly to its most suitable cloud environment is one of the most common challenges I hear from customers.
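To see why data mobility is a planning problem and not just a button, it helps to estimate raw transfer time. A rough back-of-the-envelope sketch; the 70% link-efficiency factor is an assumption for illustration, and real migrations also pay for egress fees, throttling, and re-validation:

```python
def transfer_hours(terabytes, gigabits_per_sec, efficiency=0.7):
    """Rough wall-clock hours to move data at a sustained link rate.

    `efficiency` is an assumed fraction of nominal bandwidth actually
    achieved; real-world overheads vary widely.
    """
    bits = terabytes * 1000**4 * 8          # decimal terabytes -> bits
    usable_bits_per_sec = gigabits_per_sec * 1e9 * efficiency
    return bits / usable_bits_per_sec / 3600

# Moving 100 TB over a dedicated 1 Gbps link takes roughly two weeks.
print(f"{transfer_hours(100, 1):.0f} hours")
```

Even this optimistic arithmetic shows why "just move it to the other cloud" can mean weeks of wall-clock time for petabyte-class data sets.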
As business requirements, SLAs, IT budgets, and other factors change, customers need the ability to move data — both within a single provider’s infrastructure and across platforms — with minimal friction. Do not expect to inherit easy tools from your cloud provider to do this: it’s quite a complex process spanning multiple clouds, and few providers enable it.

Another major customer priority should be data recovery. Cloud storage services come with some level of redundancy, which provides durability for your cloud data in the event of a systems failure. Do not, however, confuse durability with availability. According to the same VansonBourne study, 63% of organizations doubt their ability to recover quickly from a downtime event.

Unless you take the time to implement a backup and recovery strategy that is aligned with your SLAs, you will likely be left waiting to access critical data if cloud infrastructure goes down. To guard against ransomware threats and possible data corruption, you need backups that are both high-quality and readily available.

Conclusion

All of these areas are potential landmines that can undermine or derail your cloud strategy. If any of them apply to your organization, you are far from alone – most customers carry at least some vulnerability due to misconfiguration or oversights in the cloud. If not properly addressed, these gaps in coverage between your company and your cloud providers can become serious issues.

The reality is that setting up and maintaining multiple clouds gets very complicated, very fast. When business units and their developer teams are in the driving seat, they typically lack the expertise and knowledge to solve all of these issues. Cloud providers have worked hard to make their platforms easy to adopt and consume, pushing much of the complexity and thorny configuration behind the scenes.
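One responsibility that never transfers to the provider, however polished the platform, is verifying that backups actually meet your recovery point objectives (RPOs). A minimal sketch, with made-up workload names and targets:

```python
from datetime import datetime, timedelta

def rpo_violations(catalog, now):
    """Return workloads whose newest backup is older than their RPO target.

    `catalog` maps workload name -> (last successful backup, RPO target).
    """
    return {name for name, (last_backup, rpo) in catalog.items()
            if now - last_backup > rpo}

# Hypothetical catalog; a real one would come from your backup tooling.
now = datetime(2019, 6, 1, 12, 0)
catalog = {
    "billing-db": (datetime(2019, 6, 1, 11, 30), timedelta(hours=1)),
    "file-share": (datetime(2019, 5, 30, 23, 0), timedelta(hours=24)),
}
print(rpo_violations(catalog, now))   # file-share has missed its 24-hour RPO
```

Running a check like this on a schedule, rather than trusting that backups "just happen," is exactly the kind of task that stays on the customer's side of the responsibility line.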
That is why it is critical that IT takes a strong role in managing and overseeing cloud deployments across the enterprise.

Ultimately, companies need a multi-cloud strategy and operational approach that goes beyond what the market has given them so far. Today, solving for these responsibility gaps falls disproportionately on the shoulders of customers. They need a way to automate administration and reduce complexity, ideally by managing their entire cloud presence through a single interface.

This is why we created Dell Technologies Cloud. We assessed the pain points customers have experienced as they have gone all-in on cloud and built an offering to make things easier for them. While cloud providers offer robust infrastructure platforms upon which to build your business, enterprises need to make sure they understand the gaps and have a plan for filling them.

If you are looking to leverage the power of the cloud but rein in some of the chaos it has caused you so far, Dell Technologies Cloud can help you get started.

[1] IDC White Paper, sponsored by Cisco, Adopting Multicloud — A Fact-Based Blueprint for Reducing Enterprise Business Risks, June 2018

2 more Indiana men face federal charges in Capitol riot

INDIANAPOLIS (AP) — Two central Indiana men face federal charges stemming from the deadly Jan. 6 riot at the U.S. Capitol building. Federal authorities say in a criminal complaint filed in U.S. District Court for the District of Columbia that photographs show Israel Tutrow of Greenfield and Joshua Wagner of Greenwood were inside the Capitol that day while Congress met to certify results from the presidential election. The charges they face include disorderly conduct that impedes the conduct of government business, and parading, demonstrating or picketing in the Capitol buildings. Wagner surrendered Tuesday. A warrant has been issued for Tutrow’s arrest.