Channel: VMware Blogs

VMware vSAN Pass-Through vs RAID0 Storage Controller


This is a reflection post on the different scenarios you face depending on whether you're using a VMware vSAN certified storage controller that supports pass-through (also called IT mode or JBOD mode; these are all names for the same thing) or a storage controller that supports RAID0 only. I said only because it means that it is limited […]

Read the full post VMware vSAN Pass-Through vs RAID0 Storage Controller at ESX Virtualization.


Using AzureRM and Rubrik PowerShell Modules to Consume Azure Blob Storage


It’s no secret that I enjoy tinkering around with PowerShell to automate “all the things” and generally make life easier for those in the community. During the 5th Annual PowerShell and DevOps Global Summit (that’s a mouthful, eh?) earlier this year, I was introduced to the AzureRM module for PowerShell and knew that I wanted to fire it up and begin learning Azure at a deeper level. And since Rubrik’s Cloud Data Management platform has supported Azure blob storage as an archive target for some time now, it seemed like the most logical place to start.

Yo dawg, I heard you liked PowerShell …

In this post, I’ll cover using the Microsoft Azure PowerShell modules to authenticate to Azure; create a resource group, storage account, and storage container; and connect the newly created container to Rubrik for use as an archive location.

Installation and Authentication

The GitHub page for Azure PowerShell has all of the details for installation. I ended up using Install-Module -Name AzureRM -Scope CurrentUser to deploy the modules into my OneDrive folder from the PowerShell Gallery. For me, that location is C:\Users\chris\OneDrive\Documents\WindowsPowerShell\Modules. This is a simple way to sync modules across devices.
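For reference, here is roughly what that looks like as a paste-able snippet (the Import-Module line is optional, since PowerShell will auto-load the module on first use):

# Install the AzureRM rollup module from the PowerShell Gallery for the current user only.
Install-Module -Name AzureRM -Scope CurrentUser

# Optionally load it into the current session right away.
Import-Module -Name AzureRM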

PS> Get-Module -ListAvailable

    Directory: C:\Users\chris\OneDrive\Documents\WindowsPowerShell\Modules

ModuleType Version    Name                                ExportedCommands
---------- -------    ----                                ----------------
Script     0.2.0      Azure.AnalysisServices              {Add-AzureAnalysisServicesAccount, Restart-AzureAnalysisSe...
Script     2.8.0      Azure.Storage                       {Get-AzureStorageTable, New-AzureStorageTableSASToken, New...
Script     3.8.0      AzureRM                             {Update-AzureRM, Import-AzureRM, Uninstall-AzureRM, Instal...
Script     0.2.0      AzureRM.AnalysisServices            {Resume-AzureRmAnalysisServicesServer, Suspend-AzureRmAnal...
Script     3.6.0      AzureRM.ApiManagement               {Add-AzureRmApiManagementRegion, Get-AzureRmApiManagementS...
Script     2.8.0      AzureRM.Automation                  {Get-AzureRMAutomationHybridWorkerGroup, Get-AzureRmAutoma...
Script     2.8.0      AzureRM.Backup                      {Backup-AzureRmBackupItem, Enable-AzureRmBackupContainerRe...
Script     2.8.0      AzureRM.Batch                       {Remove-AzureRmBatchAccount, Get-AzureRmBatchAccount, Get-...
Script     0.11.0     AzureRM.Billing                     Get-AzureRmBillingInvoice
Script     2.8.0      AzureRM.Cdn                         {Get-AzureRmCdnProfile, Get-AzureRmCdnProfileSsoUrl, New-A...
Script     0.6.0      AzureRM.CognitiveServices           {Get-AzureRmCognitiveServicesAccount, Get-AzureRmCognitive...
Script     2.9.0      AzureRM.Compute                     {Remove-AzureRmAvailabilitySet, Get-AzureRmAvailabilitySet...
Script     2.8.0      AzureRM.DataFactories               {Remove-AzureRmDataFactory, Get-AzureRmDataFactoryRun, Get-...
Script     2.8.0      AzureRM.DataLakeAnalytics           {Get-AzureRmDataLakeAnalyticsDataSource, New-AzureRmDataLa...
Script     3.6.0      AzureRM.DataLakeStore               {Get-AzureRmDataLakeStoreTrustedIdProvider, Remove-AzureRm...
Script     2.8.0      AzureRM.DevTestLabs                 {Get-AzureRmDtlAllowedVMSizesPolicy, Get-AzureRmDtlAutoShu...
Script     2.8.0      AzureRM.Dns                         {Get-AzureRmDnsRecordSet, New-AzureRmDnsRecordConfig, Remo...
Script     0.2.0      AzureRM.EventHub                    {New-AzureRmEventHubKey, Get-AzureRmEventHubNamespace, Get-...
Script     2.8.0      AzureRM.HDInsight                   {Get-AzureRmHDInsightJob, New-AzureRmHDInsightSqoopJobDefi...
Script     2.8.0      AzureRM.Insights                    {Get-AzureRmUsage, Get-AzureRmMetricDefinition, Get-AzureR...
Script     1.4.0      AzureRM.IotHub                      {Add-AzureRmIotHubKey, Get-AzureRmIotHubEventHubConsumerGr...
Script     2.8.0      AzureRM.KeyVault                    {Add-AzureKeyVaultCertificate, Set-AzureKeyVaultCertificat...

Once the modules are installed – and there are quite a few of them – it’s time to authenticate. The basic command is simply Add-AzureRmAccount, which has an alias of Login-AzureRmAccount if you prefer. I also suggest using the SubscriptionName parameter if you have multiple subscriptions beyond Pay-As-You-Go.

$subscriptionName = 'Visual Studio Enterprise'
Add-AzureRmAccount -SubscriptionName $subscriptionName

This brings up an interactive login prompt that is only needed once for the session. To see connection details you’ll need to pull up the current context using Get-AzureRmContext. You can then see the account and subscription details. In my case, I’m using my monthly Visual Studio Enterprise credits.

PS> Get-AzureRmContext

Environment           : AzureCloud
Account               : email@example.com
TenantId              : 1234567890
SubscriptionId        : abcdefg123
SubscriptionName      : Visual Studio Enterprise
CurrentStorageAccount :

For scripting purposes I’ve created a simple try/catch logic statement that uses Get-AzureRmContext to determine if you’re already connected. If that fails, the catch portion will execute and ask for login credentials. Details on the intended subscription are then pulled. This ensures that the correct context and subscription are selected when performing work in Azure. Otherwise, I might find myself owing actual money instead of using credits.

try {
  Get-AzureRmContext
  $subscriptionDetail = Get-AzureRmSubscription -SubscriptionName $subscriptionName -ErrorAction Stop
}
catch {
  if ($_.Exception -match 'Run Login-AzureRmAccount to login')
  {
    Write-Warning -Message 'No session detected. Prompting for login.'
    $subscriptionDetail = Add-AzureRmAccount -SubscriptionName $subscriptionName -ErrorAction Stop
  }
  else
  {
    throw $_
  }
}

At this point I have a valid connection to Azure and am ready to start building.

Creating a Resource Group

Before building anything in Azure I need to make sure that I have a resource group. This is a high level hierarchy item that logically groups together objects such as virtual machines, network interfaces, storage accounts, and so forth. Making one is really simple: you just have to supply a name and a location.

$resourceGroup = 'wahlresgroup'
$resourceGroupLocation = 'westus'
New-AzureRmResourceGroup -Name $resourceGroup -Location $resourceGroupLocation

And … that’s it. A new resource group exists. Because I live in California, I chose the West US region, which is expressed as westus.
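If you’re not sure which region string to use, the module can list the valid location names for your subscription. A quick lookup sketch (the Location and DisplayName property names come from the AzureRM location objects):

# List the programmatic region names (e.g. 'westus') alongside their display names.
Get-AzureRmLocation | Sort-Object -Property Location | Select-Object -Property Location, DisplayName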

For scripting purposes, I like to first check to see if the resource group exists before making a new one. Here’s an example:

try {
  $resourceGroupDetail = Get-AzureRmResourceGroup -Name $resourceGroup -ErrorAction Stop
}
catch {
  if ($_.Exception -match 'Provided resource group does not exist')
  {
    Write-Warning -Message "Provided resource group does not exist. Creating $resourceGroup in $resourceGroupLocation."
    $resourceGroupDetail = New-AzureRmResourceGroup -Name $resourceGroup -Location $resourceGroupLocation -ErrorAction Stop
  }
  else
  {
    throw $_
  }
}

This will make sure that $resourceGroupDetail is populated with information whether a new or an existing resource group is used. And if the error doesn’t match the generic “does not exist” sort of message, the script halts and throws the error. Pretty? Not really. But it works.

Creating a Storage Account

The storage account is used to control the containers – or “buckets,” if you will – including the storage type, tier, resiliency, and other factors. Creating a storage account is dependent on having a resource group – because everything has to live within a resource group – which is why the resource group was created first.

Making a storage account is more complex than a resource group. There are several optional parameters. Here’s one example:

$storageAccount = 'wahlstorageaccount'
$storageKind = 'BlobStorage'
$storageTier = 'Hot'
$storageSkuName = 'Standard_RAGRS'
New-AzureRmStorageAccount -ResourceGroupName $resourceGroup -Name $storageAccount -Kind $storageKind -AccessTier $storageTier -SkuName $storageSkuName -Location $resourceGroupLocation

While several of the parameters are user defined, there are three – the kind, tier, and SKU for the storage – that are required by Rubrik. This results in creating a storage account that will hold containers that are blob storage (instead of block or file) using the “hot” tier (data is accessed frequently because Rubrik is managing the data) in the standard read-access geo-redundant storage format (immutable data that is replicated to two other regions for high availability). The remaining variables are based on whatever you wish to name things.

At this point we have a Resource Group with one Storage Account as a member

The full script segment looks like this:

try {
  $storageAccountDetail = Get-AzureRmStorageAccount -ResourceGroupName $resourceGroup -Name $storageAccount -ErrorAction Stop
}
catch {
  if ($_.Exception -match 'was not found')
  {
    Write-Warning -Message "Provided storage account does not exist. Creating $storageAccount."
    $storageAccountDetail = New-AzureRmStorageAccount -ResourceGroupName $resourceGroup -Name $storageAccount -Kind $storageKind -AccessTier $storageTier -SkuName $storageSkuName -Location $resourceGroupLocation -ErrorAction Stop
    $storageAccountKey = ($storageAccountDetail | Get-AzureRmStorageAccountKey -ErrorAction Stop)[0].Value
  }
  else
  {
    throw $_
  }
}

Note that there’s also a bit of code that snags one of the storage account keys and saves it to $storageAccountKey. This is because any new storage account is assigned two keys to be rotated at will. I snag the first key to be used by Rubrik to access the storage account. This could be altered if desired.
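If you want to inspect or rotate the keys yourself (say, for an existing storage account that the catch block above never touches), the storage account key cmdlets cover that too. A minimal sketch, assuming the default key names Azure assigns (key1 and key2):

# List both keys for the storage account and grab the second one instead of the first.
$keys = Get-AzureRmStorageAccountKey -ResourceGroupName $resourceGroup -Name $storageAccount
$keys | Select-Object -Property KeyName, Value
$storageAccountKey = $keys[1].Value

# Regenerate (rotate) the first key if it has been exposed somewhere it shouldn't be.
New-AzureRmStorageAccountKey -ResourceGroupName $resourceGroup -Name $storageAccount -KeyName 'key1'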

Creating a Storage Container

The final step in Azure is to build a storage container. It is the last piece of the cloudy puzzle – huzzah!

To do this, we’ll need a private container created in the storage account’s current context. Using the New-AzureStorageContainer cmdlet with the permission and context parameters does the trick. Note that $storageAccountDetail was populated in the previous section and is being re-used here to provide the contextual details on the storage account in one parameter. Handy, yes?

$storageContainer = 'wahlcontainer'
New-AzureStorageContainer -Name $storageContainer -Permission Off -Context $storageAccountDetail.Context

A ready-to-use Storage Container

For scripting purposes, I have used a try/catch segment to … well, you know the drill by now, right?

try {
  $storageContainerDetail = Get-AzureStorageContainer -Context $storageAccountDetail.Context -Name $storageContainer -ErrorAction Stop
}
catch {
  if ($_.Exception -match 'Can not find the container')
  {
    Write-Warning -Message "Provided storage container does not exist. Creating $storageContainer."
    $storageContainerDetail = New-AzureStorageContainer -Name $storageContainer -Permission Off -Context $storageAccountDetail.Context -ErrorAction Stop
  }
  else
  {
    throw $_
  }
}

We now have validated that the required resource group, storage account, and storage container exist and are ready to be plugged into Rubrik as an archive location.

Adding an Archive Location to Rubrik

The only step needed for Rubrik is to take details from the Azure pieces and feed them into the Rubrik RESTful API. To do this, I’m leveraging the Rubrik PowerShell module to establish a connection to the distributed cluster and then making one API call.

There is a little bit of pre-work required. As detailed in the user guide, you’ll need to generate your own 2048-bit RSA key for encrypting the data. This way you’ll own the encryption key and no one else has any idea what it is. Being the lazy person that I am, I just rely on a tiny Ubuntu VM running on Azure to execute this openssl command:

openssl genrsa -out rubrik_encryption_key.pem 2048

I then save that key to a safe location and reference it in the script. I use Test-Path to ensure that I didn’t fat finger the path and Out-String to concatenate the file contents into a single string.

$rsaPrivateKeyPath = 'C:\Secure\rubrik_encryption_key.pem'
if (Test-Path -Path $rsaPrivateKeyPath)
{
  $rsaPrivateKeyDetail = Get-Content -Path $rsaPrivateKeyPath | Out-String
}
else
{
  throw 'Invalid RSA private key path entered.'
}

With that out of the way, we can ask Rubrik’s distributed task scheduler to attach the new Azure blob storage as an archive location. The accessKey, name, bucket, and secretKey values are all derived from variables populated during the earlier Azure segments, while the pemFileContent value comes from the code snippet directly above. The objectStoreType is always going to be Azure for this particular script.

$rubrikIp = '172.17.28.11'
Connect-Rubrik -Server $rubrikIp

$body = @{
  accessKey       = $storageAccount
  name            = "Azure:$storageContainer"
  bucket          = $storageContainer
  objectStoreType = 'Azure'
  secretKey       = $storageAccountKey
  pemFileContent  = $rsaPrivateKeyDetail
}

$r = Invoke-WebRequest -Uri "https://$($rubrikConnection.server)/api/internal/data_location/cloud" -Method Post -Headers $rubrikConnection.header -Body ($body | ConvertTo-Json)
return ConvertFrom-Json $r.Content

Thoughts

While my original script was just a few lines to prove that this could be done, I ended up wrapping everything with try/catch code to make it easier to re-use existing resource group, storage account, and storage container details. I also like the idea of having some level of error handling. Here’s what it looks like when the entire script is run from start to finish.

PS> .\New-RubrikAzureContainer.ps1
WARNING: No session detected. Prompting for login.
WARNING: Provided resource group does not exist. Creating wahlresgroup in westus.
WARNING: Provided storage account does not exist. Creating wahlstorageaccount.
WARNING: Provided storage container does not exist. Creating wahlcontainer.
WARNING: You did not submit a username, password, or credentials.

Name                           Value
----                           -----
api                            v1
server                         172.17.28.11
header                         {Authorization}
userId                         1234567890
time                           5/8/2017 13:56:27

jobInstanceId : ADD_ARCHIVAL_DATA_LOCATION_1234567890

The entire process took about 30 seconds. I then requested an on-demand backup of a small fileset just to validate that I had done everything correctly.

The results? 2,397 files worth of user directory data that used 256.5 MB on the source have been stored in the Azure blob storage container using 27 MB of space.

I hope you’ve enjoyed this post and are able to start using the AzureRM modules for fun and profit!

The post Using AzureRM and Rubrik PowerShell Modules to Consume Azure Blob Storage appeared first on Wahl Network.

macos + Log Insight


I recently had an issue with my Macbook Pro and used Log Insight to track down the issue. In the process I realized I have not blogged about how to configure macos to log to Log Insight. In this post, I will cover the steps. Read on to learn more! Log Insight supports macos?! The […]

The post macos + Log Insight appeared first on SFlanders.net by Steve Flanders.

New SIOS Offerings in AWS Quick Start and AWS Marketplace Enable Accelerated Delivery of SQL Server High Availability Clusters in the Cloud

To simplify and accelerate the deployment of high availability SQL Server clusters in the cloud, SIOS Technology Corp. today announced that its SIOS… Read more at VMblog.com.

Updated Nested ESXi 6.0u3 & 6.5d Virtual Appliances

I finally found a bit of "extra" spare time to update my Nested ESXi Virtual Appliances to support some of the recent releases of ESXi, 6.0 Update 3 and 6.5d, which enables customers to easily and quickly deploy vSAN 6.6 in their environment for testing, development or learning purposes. If you have not used this appliance before, please […]

Tech Data Enhances Dell EMC HyperConverged Infrastructure Value with Cloud Services

Tech Data Corporation today announced that its Technology Solutions business is enhancing the value of Dell EMC VxRail Appliances and VxRack Systems... Read more at VMblog.com.

VMware Introduces Integrations with Dell EMC to Accelerate Workforce Transformation

VMware introduced new technology integrations with Dell that enhance VMware End-User Computing solutions to help customers realize the benefits of... Read more at VMblog.com.

VMware and Pivotal Extend Strategic Alliance to Integrate VMware NSX and Pivotal Cloud Foundry to Deliver New "Developer-Ready Infrastructure"

VMware, Inc. unveiled that the company is working with Pivotal to deliver "Developer-Ready Infrastructure." Highlighted today in a keynote address... Read more at VMblog.com.

IBM Extends Data Science Collaborative Workspace to the Private Cloud

IBM today announced the availability of a collaborative workspace for private clouds geared towards organizations and data scientists working with... Read more at VMblog.com.

Velostrata Names Jan Poczobutt Vice President of Sales, North America

Velostrata, a leader in cloud workload mobility, today announced Jan Poczobutt as vice president of sales for North America. Jan brings over two... Read more at VMblog.com.

VeloCloud SD-WAN Receives 2017 NFV Innovation Award and 2017 SDN Excellence Award

VeloCloud Networks, Inc., the Cloud-Delivered SD-WAN company, today announced that TMC, a global, integrated media company helping clients build... Read more at VMblog.com.

VMworld 2017 Oracle Customer Bootcamps


VMworld 2017 Oracle Customer Bootcamps

On a mission to arm yourself with the latest knowledge and skills needed to master application virtualization?

VMworld customer bootcamps can get you in shape to lead the virtualization charge in your organization, with instructor-led demos and in-depth coursework designed to put you in the ranks of the IT elite.

Oracle on vSphere
The Oracle on VMware vSphere Bootcamp will give attendees the opportunity to learn the essential skills necessary to run Oracle implementations on VMware vSphere. The best practices and optimal approaches to deployment, operation and management of Oracle database and application software will be presented by VMware expert Sudhir Balasubramanian, who will be joined by other VMware and industry experts.

This technical workshop will exceed the standard breakout session format by delivering “real-life,” instructor-led, live training and incorporating the recommended design and configuration practices for architecting Business Critical Databases on VMware vSphere infrastructure. Subjects such as Real Application Clusters, Automatic Storage Management, vSAN and NSX will be covered in depth.

Learn More

https://www.vmworld.com/en/us/learning/sessions.html?mid=9592&eid=CVMW2000001358867&elqTrackId=ac4f78fd201d4b8ea8c06c94903ec64e&elq=a30d659ad2934a969e912b357d9624d2&elqaid=9592&elqat=1&elqCampaignId=4153

The post VMworld 2017 Oracle Customer Bootcamps appeared first on Virtualize Business Critical Applications.

Druva Announces Record Growth in Cloud Server Data Protection

Druva, the leader in cloud data protection and information management, today announced record growth across its server data protection business.... Read more at VMblog.com.

Dell EMC wprowadza Integrated Data Protection Appliance


At the Dell EMC World 2017 conference, Dell EMC presented new backup and data protection solutions that provide data security and protection against failures and downtime. Dell EMC Integrated Data Protection Appliance (IDPA) is a new, purpose-built, integrated solution that combines software, protection storage, search, and analytics capabilities in a single appliance, protects data across a wide range of applications and platforms, and offers native cloud tiering for long-term retention.

Microsoft Azure Cosmos DB: A Globally Distributed, Multi-Model Database Service


Microsoft’s Azure platform is growing in a big way, with the company announcing a new database service it is calling Azure Cosmos DB. The new service is launching globally and, unlike nearly every previous Microsoft product, it is not in a preview state; the product is now generally available.

This new database service is designed for everything from IoT to AI to mobile, with high levels of performance, fault tolerance and support for nearly every data type. The company claims that this is the first globally distributed, multi-model database service that provides horizontal scale with guaranteed uptime, throughput, and millisecond latency at the 99th percentile, all backed by SLAs.


Cosmos DB is a schema-free database service that supports NoSQL APIs and is also capable of auto-indexing all of your data. Because of this auto-indexing, queries can be performed faster and more accurately, as you no longer have to overcome the constraints of complex schema and index management or schema migration in a globally distributed setup.

The goal of Cosmos DB is to allow developers to scale across a wide number of geographic regions with SLAs supporting uptime, performance, latency, and consistency. In short, you can launch an application or service nearly instantly with global support with extremely low latency in nearly any region of the world.

If you are curious about how Microsoft was able to launch this product globally without a preview phase, the product started out as DocumentDB. I don’t believe they are doing away with the DocumentDB service; rather, it is being rebranded as Cosmos DB, which is the evolution of the underlying technology that powers that solution.

This product appears to be Microsoft’s response to Google’s Spanner technology. But Microsoft is looking to take its product further than what Google offers, with higher levels of financially backed global performance metrics as well as an assurance of consistency.

It is still early days for seeing how well Cosmos DB performs in the real world, but Microsoft did say that customers like Jet.com are already using the technology and that the platform is processing 100 trillion transactions per day.

Expect to hear a lot more about Cosmos DB at Build 2017 as Microsoft provides more information about how the service operates and the features (and limitations) of the platform.

 

The post Microsoft Azure Cosmos DB: A Globally Distributed, Multi-Model Database Service appeared first on Petri.


Microsoft Adds 440,000 Windows 10 Users Per Day During Last Seven Months


Microsoft has announced today at Build that the company’s latest desktop operating system has reached 500 million monthly active users. This milestone comes about 650 days after the OS was released to consumers, but the one question we can’t answer is how many corporations are adopting the OS.


For comparison, Windows 7 reached 630 million installs in 983 days, which equates to about 640,000 copies being sold every day, whereas Windows 10 has added about 769,000 active users per day since its release. But these numbers don’t tell the entire story, and we can get a better picture of Windows 10’s current adoption rate since we have additional figures to utilize.

On September 26th, Microsoft told us that there were 400 million Windows 10 active users, and with today’s announcement of 500 million active users, that makes a span of 226 days between the two figures. Using these numbers, Microsoft added roughly 442,447 new users to Windows 10 per day during that time period.
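As a quick sanity check on those rates, here is the back-of-the-envelope math in PowerShell (simple averages of the publicly quoted milestones, not Microsoft’s own figures):

# Average daily additions implied by the quoted milestones.
$win7PerDay   = 630000000 / 983   # Windows 7: 630M installs over 983 days       -> ~641,000/day
$win10PerDay  = 500000000 / 650   # Windows 10: 500M active users over ~650 days -> ~769,000/day
$recentPerDay = 100000000 / 226   # 400M -> 500M over the 226 days between announcements -> ~442,000/day

'{0:N0} / {1:N0} / {2:N0}' -f $win7PerDay, $win10PerDay, $recentPerDay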

Sure, the adoption rate has cooled since the initial release of the OS, but that’s expected as Windows 10 is no longer a spring chicken. That being said, the OS is still growing at a healthy rate, but what I really want to know is at what rate corporations are adopting the OS.

Windows 7 is entering the later phases of its lifecycle, which means large corporations need to start moving more aggressively to the new OS, as it is not expected that Microsoft will extend the lifecycle of the software like it did with Windows XP.

If the 500 million figure is representative of large corporations moving to Windows 10, then this figure is not as rosy as it may seem. But if large corporations have not yet adopted the OS and this install base is largely made up of consumers, then Windows 10 will continue its strong growth for the foreseeable future.

The post Microsoft Adds 440,000 Windows 10 Users Per Day During Last Seven Months appeared first on Petri.

Microsoft’s New Database Migration Service Helps You Move Beyond Oracle


At some point in time, nearly every large company has likely used an Oracle product, and while hardware is relatively easy to move away from, changing your software infrastructure can become a serious headache. At Build today, Microsoft announced a new service called Database Migration Services, aimed at helping Oracle (and other) customers come into the Microsoft fold.

The goal is to make moving from Oracle’s infrastructure to Microsoft’s a much simpler task, and the company is announcing an early preview of this new offering.


This is not the first time we have seen Microsoft go directly after Oracle. Last year, the company began offering a ‘free’ SQL Server 2016 license to customers moving to the Microsoft world, but this latest offering is a bit more aggressive.

Starting today, the service is in private preview, and Microsoft formally says it is designed to help move existing competitive and SQL Server databases to Azure.

By making the process of migrating database platforms easier, this is Microsoft’s best foot forward for attracting new customers in this lucrative space. As Oracle has shown, once a customer builds a database around your infrastructure, it is a rarity that they will change services. Thus, Microsoft has built the tools needed to make the migration away from Oracle easier, with the long-term goal of locking new customers into the company’s software and cloud database services.

Details around this new service are still a bit light, but I’ll keep poking around at Build to see what else the company is willing to share. For now, know that Microsoft is going hard after Oracle customers and is trying to make the painful task of switching databases a little bit easier.

The post Microsoft’s New Database Migration Service Helps You Move Beyond Oracle appeared first on Petri.

Dell EMC World 2017: ScaleIO 3.0!


I want you, dear reader, to stop and pay close attention. I’ve said it before, and I will say it again:

SDS and HCI architectural approaches are now ready for the majority of x86 workloads.  

There is a sustained space for external purpose-built storage platforms (think SAN, NAS, Object appliances) for certain workloads (low in count, but high in value) where there is a need for:

  • specific data services (can be anything – SRDF replication behaviors, inline dedupe that is a machine, protocol support) that are not available in the SDS/HCI worlds.
  • capacity densities that cannot be met with general-purpose x86 server building blocks.
  • latencies that are very VERY low (think consistently <1ms) or workloads that are very sensitive to latency jitter (think hundreds of microseconds) where distributed storage stacks that use Ethernet fabrics don’t cut it.

But… recognize: that second grouping is a subset (an important subset!) of workloads.

Say it with me everyone: SDS and HCI architectural approaches are now ready for the majority of x86 workloads.  

This means that every (!) customer should start to evaluate SDS/HCI models – and then find the workloads (very important ones) that are NOT a fit.

Look – the SDS/HCI market is dwarfed by the external SAN/NAS/Object appliance markets – but that shouldn’t be confused with technical fit. It’s a function of inertia.

That’s why having the strongest SDS portfolio (ScaleIO and vSAN – and extending to Isilon, ECS, and others) is so important for Dell Technologies. It’s also why we will continue to double down on making those SDS stacks available in the 3 consumption forms: software-only, software bundled with PowerEdge, and in the form of turnkey systems (which manifest as HCI appliances like VxRail, and HCI rack-scale systems like VxRack).

Citibank was at Dell EMC World sharing their ScaleIO story. 85PB deployed. Full production. Running massive numbers of workloads. Hundreds of millions of dollars saved by their on-premises cloud, running on ScaleIO. Not my words – the customer’s words.

It’s NOT about being low cost/GB (though it is very compelling). It’s about starting small and growing as you need. It’s about being ORDERS OF MAGNITUDE easier to scale, pool, share, and automate. It’s about never needing to do a migration ever again. It’s about being able to tap into hardware ecosystem innovation – FAST.

That’s why ScaleIO is so important for Dell Technologies (VMware and Dell EMC, specifically).

vSAN is great for customers who are all about VMware, and who want an HCI operational model – all the time. Awesome. On stage, AIG talked about how this is the way forward for them.

ScaleIO is for people who want a Server SAN model – something that replaces their SAN, and matches and beats its operational model: supporting VMware and non-VMware, scaling/sharing/pooling storage independently of compute. On stage, Citi talked about how this is the way forward for them.

Customers are different – we uniquely support both.

That’s not my opinion – it’s the company’s opinion, and our strategic position.

This is why ScaleIO 3.0, announced at Dell EMC World is so important – for 7 reasons.

1) More Effective Usable Capacity

ScaleIO 3.0 introduces multiple space efficiency features, including inline compression, space-efficient thin provisioning and snapshots, to maximize your storage investment.

2) Performance and Acceleration Using Dell PowerEdge 14G and NVMe Drives

ScaleIO 3.0 leads the software-defined storage market in usage of the latest Dell 14G servers as Ready Nodes, including advanced performance and metadata acceleration using NV-DIMMs and NVMe drives.

3) Balance Cost and Performance with Seamless Volume Migration

Seamless volume migration in ScaleIO 3.0 simplifies storage operations by providing the flexibility to easily rearrange and optimize data placement across storage pools and protection domains. This, for example, enables easy movement of volumes between flash-only, hybrid, and HDD-only pools.

4) Simplify VMware Deployment with vVols Support

ScaleIO 3.0 introduces full VMware vVols support enabling software-defined storage to be managed at a per-VM level which provides a better granularity of data services and a simplified way to manage VMs.

6) Boost Data Copy Management with Improved Snapshot Functionality

ScaleIO 3.0 increases storage efficiency and extends the use of snapshots by enabling the creation of more snapshot copies, automating snap management and adding unrestricted refresh or restore capabilities.

7) Streamline Provisioning and Management of ScaleIO Ready Nodes, and VxRack FLEX Systems

New Automated Management Services (AMS) in ScaleIO 3.0 delivers simple and complete automated lifecycle management for hardware and software when deploying ScaleIO with RHEL 7 on a physical storage node.   AMS is used in both ScaleIO Ready Nodes (ScaleIO lifecycle, hardware reporting, and OS imaging), and also in VxRack FLEX systems (ScaleIO lifecycle and plugs into a broader M&O for the full stack) that incorporate full system design, ToR for Spine-Leaf fabric multi-cabinet scaling and more.

ScaleIO 3.0 GA is targeted towards late this year.

Customers – if your storage partner (even if it’s us, who have the industry-leading arrays!) keeps pushing arrays on you without helping you evaluate which workloads are a fit for SDS and HCI models – point them to my blog post, and ask for more. Dell EMC field… if you’re trying to win over a new customer from a competitor and you’re not leading with the most disruptive thing we have, it’s becoming a skill-testing question, an IQ test.

Still not sure? Read StorageReview here, and here. Still not sure? Wow, you’re a skeptic :-) Fine, we’ve dropped the gauntlet. Accept our challenge! You can download and try ScaleIO and find out for yourself, right here.

Are you a ScaleIO customer?  How is it going?

Dell EMC rozszerza portfolio Cloud Data Protection


During the Dell EMC World 2017 conference, Dell EMC announced new capabilities in its cloud data protection portfolio, enabling customers to protect and back up their data simply and effectively, anywhere and at any time.

VMware Pulse IoT Center


Today we live in an era in which we have created a need to be connected to the internet, no longer just to browse, listen to music, or watch series and movies, but to remotely control almost any everyday device we use, devices that offer us a range of intelligent services and applications. That is a short definition of the "Internet of Things" (IoT). IoT is rapidly transforming traditional business models and operational processes to drive innovation and business growth, which is why VMware has launched Pulse, an end-to-end infrastructure management solution that allows IT organizations to manage, monitor, and secure their IoT.
