
Issuing CA Renewal operation

There is a German proverb, "Übung macht den Meister" (practice makes perfect), that I have always tried to apply to my day-to-day computer science skills. For my Public Key Infrastructure in the home datacenter (#HomeDC), this means having a proper multi-tier PKI infrastructure with a Standalone Root CA, an Issuing CA, and a PKI web publishing server for certificates and Certificate Revocation Lists. Nearly everyone can set up a PKI infrastructure on Microsoft Windows Server with Next-Next-Next and a 40-year Root Certificate Authority, but I had to make this a bit more challenging, so I designed it to need a yearly maintenance process that keeps my PKI skills fresh.

My PKI Certificate Lifecycle is based on the following schema:

You can find the original diagram in this Microsoft PKI Certificate Lifecycle article. So instead of having a Root CA that is valid for 20 years and an Issuing CA that is valid for 10 years, I went with smaller validity periods: 8 years for the Root CA and 4 years for the Issuing CA.

I use two different generations of PKI infrastructure. The G1 tier, which this article is written against, uses a Root CA with an RSA (4096-bit) public key and a sha512RSA signature algorithm, and the same for its Issuing CA. The G2 tier, which you will see in some of the screenshots, is based on a Root CA with an Elliptic Curve Cryptography (ECC) P-521 key and a sha512ECDSA signature algorithm.

My infrastructure has been running since 2015, so I'm now closing in on the halfway point of the Issuing CA validity period. What I decided to do is the following renewal:

  • At T+4 years the Issuing CA certificate will be renewed with a new key pair. This action enforces the 4-year lifetime of the RSA key pair as agreed when designing the PKI and its security. It will create a new CA certificate with a new key pair, and it will also force the CA to generate a new CRL file, since there is a new key pair. A CRL signed by the 'old' key pair will continue to be generated as long as the CA certificate associated with the 'old' key pair is still time-valid.

When you renew a certificate, the new version gets a (1) suffix; the certificate request is now called Issuing CA G1(1).req.

Let’s have a look at the original Issuing CA certificate on the Root CA.

And the Issuing CA detail is

This now impacts me when I attempt to sign new certificates with a validity of over 24 months, because those are limited to a validity ending on the 4th of December 2019.

The first step on the Issuing CA is to stop the PKI service and launch the Renew CA Certificate process. I decided to generate a new public and private key, so my new Issuing CA request file is named Issuing CA G1(1). Take the certificate request to the Root CA. On the Root CA, revoke the current Issuing CA certificate as Superseded and submit the new Issuing CA(1) request file. Issue the new SubCA certificate. We now have an Issuing CA certificate with two fields.

I need to export the signed certificate (I used the PKCS #7 .p7b format with certificate path), move it to the Issuing CA, and run Import CA Certificate.
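For reference, the same submit, issue and install steps can also be run from an elevated prompt; a hedged sketch using certreq/certutil (the post used the MMC, and the request ID 42 is a placeholder you would read from the CA console):

REM On the Root CA: submit the renewal request, then issue the pending request
certreq -submit "Issuing CA G1(1).req"
certutil -resubmit 42

REM On the Issuing CA: install the signed certificate and start the CA service
certutil -installCert "Issuing CA G1(1).p7b"
net start certsvc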

In the following steps I'm doing a few more operations on the Root CA. Now that I have revoked the old certificate (in hindsight, I might have been better off not revoking the original Issuing CA; I may need to update this article if I run into issues), it's time to do the annual publishing of the Certificate Revocation List (CRL).

I can now see the serial number of the revoked old Issuing CA certificate in my Root CA CRL.

Moving along, on the Issuing CA in the Active Directory I publish the updated Root CA CRL using certutil -dsPublish RootCA.crl RootCA.

For the computers and operating systems that are not in the Active Directory and cannot check certificate state from the AD, I have a Windows server running the IIS web server that publishes the CRLs. While this server has the FQDN pki-web.bussink.org, it is also referred to on my network by the alias pki.bussink.org. I copied the updated Issuing CA(1) certificate and the Root CA CRL into the directory served by IIS.

On the Issuing CA, in the Enterprise PKI tab, you can verify that all paths to the certificates, Certificate Revocation Lists and delta CRLs work. As you can see in the top part of the following screenshot, I had not yet copied the Issuing CA(1) certificate; that is corrected in the bottom part of the screenshot.

With the Issuing CA running again, I forced a publishing of the Issuing CA CRLs. You can see them below on the web server, in purple. There are two sets of CRLs: one for the original Issuing CA certificate and one for the updated Issuing CA(1) certificate.
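The forced publication can also be done from the command line; a minimal hedged equivalent of the MMC Publish action, run on the Issuing CA:

REM Generates and publishes a new base CRL (and delta CRL, if configured)
certutil -CRL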

The files in the red boxes are the ones I manually added to my PKI-WEB repository: the annual Root CA CRL and the new Issuing CA G1(1) certificate. (As mentioned above, I may have been a bit premature in removing the original Issuing CA G1 certificate... I will update this article if I run into issues.)

I wrote this blog article mostly for myself, as a recap of the operations, since I will have to redo them before 2021. While that is only 4 years down the road, I have already had the opportunity once in my career to set up a Root CA infrastructure, in 2004 on Windows Server 2003, and to renew it completely 10 years later in 2014. That renewal was a lot more complicated, as I had to change the PKI CryptoProvider from the old one, which only supported SHA1, to one that supported SHA2. This is a reminder to all professionals: if you set up a PKI, you may have to work on it again a decade later.

 

 


Become an Expert on vCenter Server appliance Log File Location in 120 seconds


The vCenter Server Appliance log file location is very important to be aware of when troubleshooting your VMware vCenter appliance and your virtual infrastructure. Most of us are more familiar with the Windows vCenter log file location, so Windows-based (GUI-oriented) administrators may find it a little difficult to identify the vCenter Server Appliance log file location.

As VMware is pushing the vCenter Server Appliance over the Windows-based vCenter Server, it is a must to learn every troubleshooting option for the appliance when issues occur. So I would like to share detailed information about the vCenter Server Appliance log file locations. I hope this post makes you an expert at identifying the log file locations of vCSA 6.5.

vCenter Server appliance Log File Location

Most of the VMware vCenter Server Appliance 6.5 log files are located in the directory /var/log/vmware/. Below is a list of the vCenter Server Appliance log file locations and a description of each log file. For better readability, I have created the table below with each of the important vCenter Server log folder locations along with a description of each appliance log file.

I logged into the vCenter Server Appliance using SSH and browsed to the directory /var/log/vmware to see all the log files of vCenter Server Appliance 6.5. The image below shows the vCenter Server Appliance log file directory.
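A minimal sketch of that check, assuming SSH access and the appliance shell are enabled on your vCSA:

# List the per-service log directories on the appliance
ls -l /var/log/vmware/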

/vpxd holds the main vCenter Server log, vpxd.log, covering all vSphere Client and WebServices connections, internal tasks and events, and communication with the vCenter Server Agent (vpxa) on managed ESXi/ESX hosts.

You can use "cat vpxd.log | more" to page through the vpxd.log file and read the vCenter Server logs.
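For example (the vpxd subdirectory is where the file lives on my vCSA 6.5; adjust the path if your build differs):

cd /var/log/vmware/vpxd
cat vpxd.log | more    # page through the main vCenter log
tail -f vpxd.log       # or follow new entries live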

I hope this helps you quickly identify and understand the vCenter Server Appliance log file locations and their descriptions. Thanks for reading! Be social and share this post on social media if you feel it's worth sharing.

The post Become an Expert on vCenter Server appliance Log File Location in 120 seconds appeared first on VMware Arena.

Nick Allen and Neil Raden have joined Wikibon.

Cloud computing offers businesses infrastructure plasticity, faster change, lower administration costs, and superior matching between costs and utilization. A true private cloud (TPC) option provides cloud benefits on premises, for costs comparable to public cloud. As Wikibon predicted, these options are starting to enter the market, and are poised for significant growth. With Stu Miniman, […]

New Strategic IT Priorities Campaigns


You asked, we listened. Starting May 8, we are launching the first of our four strategic IT priorities campaigns, available for partners via the Partner Demand Center. These solutions-focused campaigns provide you with end-to-end marketing resources, content, and offers you will need to engage prospects, drive quality leads, and close more deals.

Register now to join the May 15 vmLIVE for an exclusive insight into the first campaign.

The post New Strategic IT Priorities Campaigns appeared first on Partner News.

VMware highlights at Dell EMC World 2017


VMware Unveils IoT Management Solution
–https://www.vmware.com/radius/vmware-unveils-iot-management-solution/
–https://blogs.vmware.com/pulseiot/2017/05/09/introducing-vmware-pulse-iot-center/

Dell EMC VDI Complete Solutions
–https://blogs.vmware.com/euc/2017/05/dell-emc-vdi-complete-horizon.html
–http://virtualgeek.typepad.com/virtual_geek/2017/05/dell-emc-world-2017-vdi-completeready-bundle.html

Delivering Developer-Ready Infrastructure for Modern Application Development
–https://www.vmware.com/radius/dell-emc-world-developer-ready-infrastructure/
–https://blogs.vmware.com/cloudnative/microservices-meets-micro-segmentation-delivering-developer-ready-infrastructure-modern-application-development/

VMware & Google Extend Partnership to Accelerate Adoption of Chromebooks
–https://blogs.vmware.com/euc/2017/05/google-chromebooks-workspace-one.html

Dell & VMware Extend PC Management to the Firmware and BIOS
–http://blogs.air-watch.com/2017/05/dell-vmware-pc-management/#

Multiple announcements/blogs from DellEMC: https://blog.dellemc.com/en-us

On-demand General Session Keynotes: http://dellemcworld.com/live.htm#main=ondemand

The post VMware highlights at Dell EMC World 2017 appeared first on VMTN Blog.

Database Recovery Using a Zerto Failover Test


This guest post is by Ryan of VIRTUBYTES, where you can find his back catalog of posts. Ryan writes about VMware products and a lot of related infrastructure technologies.

A few weeks back, we discussed Zerto's ability to perform point-in-time file-level restores directly from the replication journal. However, what if the data you need from 30 minutes ago isn't readily compiled into a file, or requires manual intervention to produce, such as a database backup or a PST export of Exchange mailbox items? Introducing Zerto Failover Tests for data recovery.

A Zerto Failover Test can spin up a virtual machine from a specific point in time on an isolated network. From there, data can be manually compiled and exported.

Although the failover test process is more step-intensive than a file restore, it still addresses a vital need for many organizations: the ability to restore application data from a specific moment. When compared to traditional backups, the ability to restore data from seconds before corruption or loss is critical.

In this walkthrough, we will focus on a database backup and recovery. To do so, we will perform a failover test to an isolated network, backup the database, and finally, extract the SQL database to our production location. As this particular test network is completely isolated, we will be using VMware PowerCLI to extract the backup file.

Initiate Failover Test

Log into the ZVM and navigate to the Failover actions in the bottom pane. Ensure the Failover toggle is in the Test position and click Failover.

The Failover Test wizard will now begin. Select the appropriate VPG where the database resides.

Select the desired execution parameters for the VPG. For this test, we will set the pertinent Checkpoint to the required point in time.

Lastly, click Start Failover Test to begin the process.

Monitor the test failover initiation from the ZVM Monitoring tab or vCenter's Recent Tasks.

The test failover initiation will:

  • Register the vm at the recovery site.
  • Create a temporary scratch VMDK for changes made to the test vm.
  • Connect the vm to the specified test network.
  • Boot the vm for testing.
  • Leave the production vm powered on and continue replicating changes.

Backup Database

As mentioned previously, this test failover is set up on an isolated network. Therefore, we will need to access the vm from the virtual machine console in vCenter.

To begin, log into the recovery side vCenter and locate the test vm. It will be registered in the format of vmname-testing recovery.

Select the Open Virtual Machine Console icon to access the vm.

Once the console has opened, log into the machine with the appropriate credentials. Next, navigate to SQL Server Management Studio and connect to the database server.

Once connected, browse to the specific database in the left pane of Management Studio. Right-click the database and under Tasks select Backup…

Next, configure the save location and options for the database backup. In this instance, the backup type is Full and the Destination location for the backup is a temp directory. If you would like to change the directory, remove the existing location and add a new one.

To finish, set the Media and Backup options as desired. Click OK to start the backup.

The below message appears after a successful backup.

Extract Database Backup

Now that we have a backup of our database, we can extract the file from the isolated vm utilizing VMware PowerCLI; specifically, the Copy-VMGuestFile cmdlet. The Copy-VMGuestFile cmdlet enables administrators to copy a file to or from the guest OS of a vm using VMware Tools.

Connect to the recovery side vCenter using the Connect-VIServer cmdlet. Check out this VMware article on the basics of connecting to vCenter.

C:\PS>Connect-VIServer -Server vcenter01 -User admin -Password pass

For our example, we will copy the .bak file from the test virtual machine to a folder on our local drive.

Copy-VMGuestFile -Source c:\Temp\Test.bak -Destination c:\temp\ -VM 'testVM - testing recovery' -GuestToLocal -GuestUser domain\user -GuestPassword password

With the backup extracted, restore the database as needed.

Stop Failover Test

Once you have completed the extraction, navigate back to the ZVM. Go to Tasks from the Monitoring tab and press Stop on the failover task.

The Stop Test window will appear and allow you to enter the Result and applicable Notes. Select Stop to stop the test.

Zerto will now begin the stop tasks. The teardown process will:

  • Remove failover test vm from recovery inventory.
  • Delete scratch VMDKs used for testing.
  • Keep replicated changes made in production during the failover test.

That's it! We've now been able to grab a database backup from seconds before data corruption occurred.

NOTE – During the failover test, all IO writes are written to the scratch volume. To ensure the volume does not fill up and cause associated issues, limit the duration of the failover test.

How to put Nutanix Acropolis host into maintenance mode


Today's quick post covers how to check whether an Acropolis node is in maintenance mode and how to put one into maintenance mode. To check whether an Acropolis host is in maintenance mode, log in to the CVM over SSH and …
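For reference, a hedged sketch of the commands involved, using the Acropolis CLI names as I recall them (verify against your AOS documentation):

acli host.list                                # show hosts and their state
acli host.enter_maintenance_mode <host-IP>    # evacuate VMs and enter maintenance mode
acli host.exit_maintenance_mode <host-IP>     # return the host to service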

The post How to put Nutanix Acropolis host into maintenance mode appeared first on VMwaremine - Artur Krzywdzinski | Nutanix.

Running ScaleIO in the HomeDC

In this post, I will describe how I came to deploy ScaleIO software-defined storage in the home datacenter. Over the course of 2016, I upgraded my clusters from VMware Virtual SAN Hybrid (flash for the caching tier and SAS enterprise disks for the capacity tier) to an All-Flash tiering. This freed up multiple 4TB SAS enterprise disks from the vSAN config. Rather than remove them from the hosts, I decided to learn and test the Free and Frictionless edition of DellEMC ScaleIO.

My ScaleIO design crosses the boundaries of three VMware vSphere clusters and is hosted across eight tower-case servers in the home datacenter. In a normal production ScaleIO cluster, the recommendation is a minimum of 6 disk drives per ScaleIO Data Server (the servers serving the storage). As you will see, in my design I spread the SAS enterprise disks across the eight servers.

I'm not going to cover the definition of Protection Domains or Storage Pools in this article, but for this design I have a single Protection Domain (pd1) with a single Storage Pool, which I named SAS_pool. I did divide the Protection Domain into three separate Fault Sets (fs1, fs2 and fs3), so as to spread failures across the hosts based on the power phase each uses in my datacenter.

I've run ScaleIO across my cluster for 10 months for some specific workloads that I just could not fit, or did not want to fit, on my VMware vSAN All-Flash environment.

Here is a large screenshot of my ScaleIO configuration as it’s re-balancing the workload across the hosts.

 

Each ScaleIO Data Server (SDS) was a CentOS 7 VM running on ESXi with two or three physical devices attached to it using RDM. Each SDS had an SSD device for the RFcache (read cache) and a single or dual SAS disk drive.

At the peak of this deployment, the ScaleIO config had 41.8TB of usable storage. I set the spare capacity at 8TB, leaving 34.5TB usable. With the double parity on the storage objects, I could store only 17.2TB of data for my VMs and my vSphere hosts.

Over the past 10 months of using ScaleIO, I've found two main limitations.

  1. The ScaleIO release cycle, even more so for people using the Free & Frictionless version of ScaleIO. The release cycle is out of sync with the vSphere releases. Some versions are only released to Dell EMC customers with support contracts, and some versions take between 6 and 8 weeks to move from restricted access to public access. At the end of March 2017, there was no version of ScaleIO that supported vSphere 6.5.
  2. Maintenance & operations. Whenever I wanted or needed to upgrade an ESXi host with a patch, a driver change or a new version of NSX-v, I had to plan the power-off of the SDS VM running on that ESXi host. You can only put a single SDS into planned maintenance mode per Protection Domain, so only one ESXi host could be patched at a time. A simple cluster upgrade process that DRS would otherwise orchestrate now took much longer and required more manual steps: put the SDS in ScaleIO maintenance mode, shut down the SDS VM (and take the time to patch the Linux inside it), put the host in maintenance mode, patch ESXi, restart ESXi, exit maintenance mode, restart the SDS VM, exit the ScaleIO maintenance mode, wait for ScaleIO to rebuild the redundancy, and move on to the next host (see the sketch after this list).
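A hedged sketch of the per-host ScaleIO portion of that loop, using the scli option names as I recall them (verify against your ScaleIO version before relying on them):

scli --enter_maintenance_mode --sds_name sds01   # only one SDS per Protection Domain at a time
# ...shut down the SDS VM, patch and reboot ESXi, power the SDS VM back on...
scli --exit_maintenance_mode --sds_name sds01
scli --query_all_sds                             # confirm the SDS is back and rebuild/rebalance is done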

I’ve now decommissioned the ScaleIO storage tier as I needed to migrate to vSphere 6.5 for some new product testing.


Off to Tech Field Day 14!


Just barely made my flight from Charlotte to Boston thanks to the TSA… but I'm on my way! This will be my second Field Day event, with the first being Cloud Field Day 1. Looks like we'll be hearing from:

  • NetApp
  • Datrium
  • ClearSky Data
  • Turbonomic

Most of these I'm already pretty familiar with, so it should be a good update as well as a chance to ask questions. The outlier for me is ClearSky, and I'm anxious to hear what they have to say. I recently looked into them while inventorying the many cloud/storage companies out there, so this is good timing.

Keep up with us on Twitter (#TFD14) as well as watch the live streams right on TechFieldDay.com.

CentOS 6.8 Basic Setup (2) - Configuring Networking


Preface

Work has recently brought me back to CentOS. The lab environment for this series uses the CentOS-6.8-x86_64-minimal.iso image, i.e. the CentOS 6.8 Minimal Install. Why not the latest CentOS 6.9? Because the latest LIS 4.1.3-2 only supports up to CentOS 6.8, so CentOS 6.8 is where I pick things up again. Let's get started.



Lab Environment

  • Windows Server 2016 Hyper-V
  • CentOS 6.8 x86-64 (Kernel version 2.6.32-642.el6)



Configuring Networking

With the user accounts created, the next step is to configure networking on CentOS; this article uses a static IP address. You can configure a static IP in two ways: interactively, with the system-config-network command, or manually, by writing the static IP address, netmask and related settings into the NIC configuration file /etc/sysconfig/network-scripts/ifcfg-eth0. The default gateway and hostname go into the /etc/sysconfig/network file, and the DNS resolver settings go into the /etc/resolv.conf file.
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.10.75.8
NETMASK=255.255.255.0
IPV6INIT=no
# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=centos68.weithenn.org
GATEWAY=10.10.75.254
# cat /etc/resolv.conf
search weithenn.org
nameserver 168.95.1.1
nameserver 8.8.8.8
# cat /etc/hosts
127.0.0.1 localhost
# service network restart
Shutting down interface eth0: [ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface eth0: Determining if ip address 10.10.75.8 is already in use for device eth0...
[ OK ]

Figure: Configuring a static IP address

Once the CentOS host's network service has been restarted and the new configuration applied, use the ping command to check whether the host can reach the Internet and resolve names, or to pinpoint at which hop the host's network communication is stuck so you can troubleshoot it.
# ping -c2 127.0.0.1   //check the loopback interface
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.045 ms
--- 127.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.024/0.034/0.045/0.012 ms
# ping -c2 10.10.75.8   //check the static IP address
PING 10.10.75.8 (10.10.75.8) 56(84) bytes of data.
64 bytes from 10.10.75.8: icmp_seq=1 ttl=64 time=0.026 ms
64 bytes from 10.10.75.8: icmp_seq=2 ttl=64 time=0.047 ms
--- 10.10.75.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.026/0.036/0.047/0.012 ms
# ping -c2 10.10.75.254  //check connectivity between CentOS and the default gateway
PING 10.10.75.254 (10.10.75.254) 56(84) bytes of data.
64 bytes from 10.10.75.254: icmp_seq=1 ttl=128 time=0.774 ms
64 bytes from 10.10.75.254: icmp_seq=2 ttl=128 time=0.432 ms
--- 10.10.75.254 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1004ms
rtt min/avg/max/mdev = 0.432/0.603/0.774/0.171 ms
# ping -c2 8.8.8.8      //check connectivity to the specified DNS server
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=128 time=9.22 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=128 time=9.57 ms
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1012ms
rtt min/avg/max/mdev = 9.221/9.398/9.576/0.202 ms
# ping -c2 tw.yahoo.com  //check that name resolution works
PING oob-media-router-fp1.prod.media.wg1.b.yahoo.com (106.10.178.36) 56(84) bytes of data.
64 bytes from media-router-fp1.prod.media.vip.sg3.yahoo.com (106.10.178.36): icmp_seq=1 ttl=128 time=89.1 ms
64 bytes from media-router-fp1.prod.media.vip.sg3.yahoo.com (106.10.178.36): icmp_seq=2 ttl=128 time=89.5 ms
--- oob-media-router-fp1.prod.media.wg1.b.yahoo.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1093ms
rtt min/avg/max/mdev = 89.173/89.382/89.592/0.365 ms

Figure: Verifying that the CentOS host's network configuration works

CentOS 6.8 Basic Setup (3) - A Brief Look at the SELinux Security-Enhancement Mechanism


Preface

Work has recently brought me back to CentOS. The lab environment for this series uses the CentOS-6.8-x86_64-minimal.iso image, i.e. the CentOS 6.8 Minimal Install. Why not the latest CentOS 6.9? Because the latest LIS 4.1.3-2 only supports up to CentOS 6.8, so CentOS 6.8 is where I pick things up again. Let's get started.



Lab Environment

  • Windows Server 2016 Hyper-V
  • CentOS 6.8 x86-64 (Kernel version 2.6.32-642.el6)



Adjusting the SELinux Security-Enhancement Mechanism

Since kernel 2.6, the Linux operating system loads the SELinux (Security-Enhanced Linux) security-enhancement kernel module by default. SELinux was developed by the US National Security Agency (NSA), which released the module to the open source community in December 2000 to strengthen overall Linux security.

SELinux is designed around protection policies and integrity policies covering the operating system's file structure and file permissions. These integrity policies effectively counter intrusions, and poorly designed applications that try to step outside the system's security architecture, by providing a stricter mandatory access control framework that works together with the kernel and major subsystems. Under this architecture, a service (daemon) can only access the directories and files its service account is entitled to; any access beyond that scope is blocked by SELinux. So if a service hosted on the machine has a security hole and gets attacked, SELinux can keep the damage from the attack to a minimum.

Put simply, on a Linux system with SELinux enabled, file permissions are no longer just the traditional three permissions (read r, write w, execute x) and three identities (Owner, Group, Others); the entire file system is subject to much finer-grained permission and identity settings within an integrity architecture. Precisely because of this, Linux beginners who do not yet understand the file system and related concepts often find, while setting up network services, that a service will not start or cannot access system data because it violated an SELinux policy (SELinux blocked it). I therefore usually suggest beginners set the mechanism to warn-only, or disable it temporarily, and enable it again once they know CentOS better. That of course applies to self-testing and learning; for business production use I strongly recommend enabling SELinux to raise and protect host security.

To change the SELinux settings, edit the /etc/sysconfig/selinux configuration file, or use the system-config-securitylevel command for an interactive dialog, then reboot the host to apply the change. SELinux has three operating modes:

  • enforcing: enforcing mode (the default); SELinux actively blocks improper access.
  • permissive: permissive mode; when something violates SELinux policy the system only shows a warning message and does not actually block it. This mode suits anyone who wants to learn how SELinux works.
  • disabled: disabled mode; SELinux is turned off entirely.


I suggest changing the value to permissive, because you will then be warned whenever an operation violates SELinux policy. That way you can learn which operations SELinux would block and which it would not, so that when you later enable SELinux for real you will not get stuck, and you can raise the overall security of the hosts you manage sooner. Use the sestatus command to check the host's current SELinux mode and status. A change to this setting requires a reboot to take effect; after rebooting, remember to run sestatus again to confirm the change is active.
# sestatus
SELinux status: enabled
SELinuxfs mount: /selinux
Current mode: enforcing
Mode from config file: enforcing
Policy version: 24
Policy from config file: targeted
# vi /etc/sysconfig/selinux
SELINUX=enforcing //default
SELINUX=permissive //after the change
# setenforce 0 //switch the running setting to permissive
# sestatus
SELinux status: enabled
SELinuxfs mount: /selinux
Current mode: permissive
Mode from config file: permissive
Policy version: 24
Policy from config file: targeted

Figure: Adjusting the SELinux security-enhancement mechanism

CentOS 6.8 Basic Setup (4) - Configuring the VIM and Bash Shell Environments


Preface

Work has recently brought me back to CentOS. The lab environment for this series uses the CentOS-6.8-x86_64-minimal.iso image, i.e. the CentOS 6.8 Minimal Install. Why not the latest CentOS 6.9? Because the latest LIS 4.1.3-2 only supports up to CentOS 6.8, so CentOS 6.8 is where I pick things up again. Let's get started.



Lab Environment

  • Windows Server 2016 Hyper-V
  • CentOS 6.8 x86-64 (Kernel version 2.6.32-642.el6)
  • vim-enhanced.x86_64 2:7.4.629-5.el6_8.1



Configuring the VIM Editor Environment

VI (Visual Interface) is the default built-in file editor on Unix-like systems, but beginners tend to find it hard to use. CentOS normally installs VIM (Vi IMproved), an easier and considerably more powerful file editor; I recommend Linux beginners use it for editing files, as it should feel much more comfortable.

In this environment, because CentOS 6.8 Minimal Install is used, the VIM package is not installed by default and only the stock VI is available. If VI feels awkward, install the VIM package with the command "yum -y install vim".

Figure: Installing the VIM package

Figure: VIM package installed successfully

In addition, while VIM's default feature set is already strong, you can still add settings to make the editor even more powerful and closer to how you like to work. These are the VIM settings I habitually use:
# cat .vimrc
set number    //show line numbers
set hls       //highlight search matches
set ic        //ignore case when searching
set ai        //auto-indent new lines
set enc=utf8  //use UTF-8 encoding
# source ~/.vimrc


After re-applying the VIM environment settings, edit a file again and you will see the new environment in effect.
Figure: VIM environment settings in effect



Configuring the Bash Shell Environment

For many Linux users, the familiar shell is the system default bash (Bourne-Again Shell). Besides bash, CentOS also ships sh (Bourne Shell), csh (C Shell), tcsh (TENEX C Shell), ksh (Korn Shell) and others. Which shell you use really comes down to personal habit; whatever feels comfortable is fine.

Even without configuring anything, Bash gives you the Tab key to complete file names and a way to search previously entered commands. What does Tab completion mean? For example, to check the host's date and time you would type the date command; type da and press Tab, and the system searches for commands starting with da. Since only two commands on the system start with da, date and dateconfig, pressing Tab first auto-completes the input to date.

Bash's completion works not only for commands but also for files and directories. As for searching previous commands: after, say, running ls on some directory and cd into another, if you want to run the ls again, just type ls and press the Up arrow; Bash finds the most recent command starting with ls. This is extremely handy for day-to-day operation.

Beyond the default features, we can set Bash environment variables to make operation even more convenient. Take the date command used in the completion test above: its full path is actually /bin/date, so why does simply typing date and pressing Enter run it? Because the default shell configuration files already load the paths of commonly used commands into the PATH environment variable, letting us run commands without typing absolute paths.
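A quick, minimal way to see this resolution in action:
# which date   //shows the full path, e.g. /bin/date
# type date    //shows how bash will interpret the command
# echo $PATH   //shows the colon-separated search list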

With Bash, when a user logs into the CentOS host, the account first loads the global configuration file /etc/profile, then the personal configuration files ~/.bash_profile and ~/.bashrc in the home directory. If a setting in the global configuration file conflicts with a personal one, the system applies the personal configuration file as the final result (last match).

Once your Bash configuration is in place, apply it immediately with the command "source ~/.bashrc", or simply log out and back in. Below is the personal ~/.bashrc configuration I habitually use:
# cat ~/.bashrc
setterm -blength 0                            //disable the terminal beep
alias vi='vim -S /home/user/weithenn/.vimrc'  //launch vim with my settings file
alias ll='ls -al --color'                     //detailed, colorized listing
alias grep='grep --color'                     //colorize grep matches
alias h='history 100'                         //show the last 100 commands
# source ~/.bashrc


After re-applying the Bash Shell environment settings, try the grep command and you will see the new settings in effect.

Figure: Bash Shell environment settings in effect

How to Drive Better SAP Performance with Virtualization Layer Visibility


By: Christian Fernando

 

SAP native tools such as SAP Solution Manager lack visibility into the underlying virtualization layer, which can leave a blind spot in understanding the complete performance story. The virtualization layer includes the virtual machine, the ESXi server, CPU, memory and storage. The ability to quickly see contention for infrastructure resources and find root causes in the application and data tiers helps keep users happy and access/uptime SLAs at a high level.

In this post, I'll provide insight into how you can gain visibility into the layers below virtualization to help determine usage patterns in the business, as well as how you can respond dynamically to those changes without affecting SLAs. This blog post is the second in my series highlighting how to drive better performance for your SAP workloads. In case you missed it, check out the first: How to Pinpoint Storage Problems in Your SAP Environment.

 

Topology of a Virtualized SAP Environment

The following is an example topology of a virtualized environment in vROps:

  1. EPP is the SAP System
  2. SAP APP1 is the SAP Host
  3. D00 is the ABAP Instance
  4. SAP APP1 is the VMware VM

 

VMware's vRealize Operations (vROps) can also show additional relationships that include all my SAP HANA and/or other database instances.

 

Real-World Scenario: CPU and Physical Contention

 

With Blue Medora's SAP Management Pack for vRealize Operations, you can trace from the SAP application to the SAP host. Through the power of relationships, you can trace from the virtual machine running the application to the VMware ESXi server. In this case, there is a higher-than-normal spike in CPU and physical memory used.

 

With visibility to the application and data tiers, you can answer key questions as to why this may be happening:

  1. Is the spike sudden, and are we noticing it for the first time?
  2. Is the spike now a known and periodic one? In that case, what is its frequency and cycle time?
  3. What do we know about any new business usage patterns?
  4. Have any new usage policies and/or modules been launched?
  5. Has the number of users on the platform increased?

 

Seeing both the IT and SAP admin views into the application and IT infrastructure helps us get to the source of higher levels of usage and the causes of those spikes. With this knowledge, you can quickly and efficiently respond to the business needs that are triggering the higher demand.

The deep alerting feature raises alerts when the disks that host the SAP application and database instances are running out of space. This prevents service interruption of the business-critical SAP application environment, letting us take action before the lights go out on application access.

Let's take a look at a system with high load during a particular time, as shown below:

 

As you can see, we can garner key metrics from across the virtualization stack:

  1. Datastore I/O | Reads – This is from the VMware layer and shows a high level of read operations during the scheduled transaction run time.
  2. Performance | Block Reads – This is from the VNX SAN management pack at the LUN level, via Blue Medora's EMC VNX Management Pack.
  3. CPU | Ready (%) – This is a spike in CPU Ready on the SAP HANA database server.

 

How to Extend Visibility into Storage for SAP

 

Understanding whether storage and capacity issues are causing performance problems simplifies the troubleshooting process, giving you immediate insight into which part of the IT stack may be causing issues. The Blue Medora SAP Management Pack gives you extended visibility so you can clearly drill down to the root cause of issues, reducing mean time to innocence and eliminating alert storms.

The management pack includes more than 3,000 key performance metrics, including active session counts and system utilization. Combined with a series of out-of-the-box dashboards and reports, it gives you real-time access to understand issues as they arise instead of after they derail your performance.

To learn more about the SAP Management Pack from Blue Medora, or to download a free trial, please visit the True Visibility Suite for VMware vRealize Operations page on Blue Medora's website.

 

The post How to Drive Better SAP Performance with Virtualization Layer Visibility appeared first on VMware Cloud Management.

CentOS 6.8 Basic Setup (5) - Configuring sudo for Administrative Accounts


Preface

Work has recently brought me back to CentOS. The lab environment for this series uses the CentOS-6.8-x86_64-minimal.iso image, i.e. the CentOS 6.8 Minimal Install. Why not the latest CentOS 6.9? Because the latest LIS 4.1.3-2 only supports up to CentOS 6.8, so CentOS 6.8 is where I pick things up again. Let's get started.



Lab Environment

  • Windows Server 2016 Hyper-V
  • CentOS 6.8 x86-64 (Kernel version 2.6.32-642.el6)
  • sudo-1.8.6p3-24.el6.x86_64



Configuring the sudo Administrative Mechanism

In CentOS, the root account is known as the superuser: the most powerful administrative account in the entire operating system, powerful enough to destroy the system outright. Because root is so powerful, I strongly recommend logging into the host with a regular user account and only switching to the administrative account when an action genuinely requires elevated privileges, so that a moment's carelessness or a slip of the hand does not damage the system or its services. See, for example, GitLab's biggest crisis: an engineer accidentally deleted a large amount of data and took the online service down (TechNews).

When the CentOS host you manage has several administrators, how do you determine which of them used the root account and what they did with it? If you want to know which administrator switched to root at what time and which commands they ran, the traditional switch, su –, no longer meets that requirement. This is where configuring sudo comes in.

The sudo package was developed to make up for the shortcomings of the operating system's built-in identity-switching command su. By configuring it we can create permission groups and grant them different sets of commands, controlling each user's privileges; with the right settings we can also review at any time which user ran sudo to elevate privileges, and what they did to the system after elevating, for after-the-fact auditing.

First, use the rpm and which commands to check that the sudo package is installed (it is by default) and that its commands exist. Once confirmed, use the visudo command to edit the sudo configuration. I advise against editing the sudo configuration directly with VI or VIM: besides automatically locating the sudoers file (/etc/sudoers) and opening it in edit mode, visudo also shows a warning pointing out where a syntax or line-break error occurred when you finish editing.
# rpm -qa sudo
sudo-1.8.6p3-24.el6.x86_64
# which sudo visudo
/usr/bin/sudo
/usr/sbin/visudo


In this exercise we edit the sudoers file to remove the comment marker from the wheel group line and add a log file entry, /var/log/sudo.log. Once this is configured, whenever anyone runs the sudo command to elevate to administrator, the log settings are triggered and the system automatically creates the log file and writes the relevant details into it. The steps are as follows:
# visudo
%wheel  ALL=(ALL)       ALL                   //remove the leading comment marker
Defaults log_host, logfile=/var/log/sudo.log  //add this line


The sudoers configuration above means that any user account belonging to the wheel group can use the sudo command to temporarily elevate to the administrative account. The first time a user runs sudo, the system asks for that user's password again; once the password passes authentication, the user is temporarily authorized as the administrator account root to run the command, and if the user runs sudo again within 5 minutes the system does not prompt for the password again.
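Incidentally, that 5-minute window is itself tunable; a hedged sudoers fragment (add it via visudo; the value is in minutes, and this line is not part of the original setup):
# visudo
Defaults timestamp_timeout=5   //re-prompt for the password after 5 minutes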

Next, let's test whether the sudo logging we just configured works. Open another SSH client window and log into the CentOS host remotely as a regular user, for example weithenn (make sure the account has been added to the wheel group), then try running the vipw command to edit the user account file; you should get a Permission denied error. Now run the same vipw command through sudo, and the user account file can be edited.
[weithenn@centos68 ~]$ vipw      //try to edit the user account file
vipw: Permission denied.
vipw: Couldn't lock file: Permission denied
vipw: /etc/passwd is unchanged
[weithenn@centos68 ~]$ id        //confirm membership of the wheel group
uid=500(weithenn) gid=500(weithenn) groups=500(weithenn),10(wheel) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[weithenn@centos68 ~]$ sudo vipw //elevate privileges via the sudo mechanism

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for weithenn:    //enter the password; after authentication the command runs with elevated privileges

vipw: /etc/passwd is unchanged

Figure: Testing the sudo mechanism

Once the command has completed, inspect the sudo log and you will see the corresponding entry, confirming that sudo logging is working correctly. From the log we can tell exactly at what time (May 11 11:08:46), which user account (weithenn), on which host (centos68), logged in remotely from which terminal (pts/1), in which directory (/home/user/weithenn), switched to which identity (root), and ran which command (/usr/sbin/vipw).
# tail /var/log/sudo.log
May 11 11:08:46 : weithenn : HOST=centos68 : TTY=pts/1 ; PWD=/home/user/weithenn
    ; USER=root ; COMMAND=/usr/sbin/vipw

Figure: Inspecting the sudo log

CentOS 6.8 Basic Setup (6) - Blocking Root Login Locally and over SSH


Preface

Work has recently brought me back to CentOS. The lab environment for this series uses the CentOS-6.8-x86_64-minimal.iso image, i.e. the CentOS 6.8 Minimal Install. Why not the latest CentOS 6.9? Because the latest LIS 4.1.3-2 only supports up to CentOS 6.8, so CentOS 6.8 is where I pick things up again. Let's get started.



Lab Environment

  • Windows Server 2016 Hyper-V
  • CentOS 6.8 x86-64 (Kernel version 2.6.32-642.el6)



Blocking Root SSH Remote Login

By default, you can log into a Linux system remotely as the root administrative account to manage it. In system administration, however, security and convenience usually pull against each other: the more convenient a system is to operate, the lower its security tends to be. I recommend disabling the Linux default that allows the root administrator to log in remotely, for the following reasons:

  • The host becomes easier to break into: with the administrator account name already known, all that remains is to try passwords, which makes brute-force password-guessing attacks easy.
  • When a host has many administrators and everyone logs in as root to manage it, there is no way to audit who changed a file or performed an action, because every record just says root.
  • Managing the system while logged in directly as root means a mistyped command can very likely destroy the system. For example, intending to delete the /test directory with rm –rf /test, a careless extra space, rm –rf / test, tells the operating system to delete the root directory (/) as well as the current test directory.


To turn off the CentOS default that allows root to log in remotely (PermitRootLogin yes -> no), edit the /etc/ssh/sshd_config configuration file and reload the SSH service to apply the change. Once applied, verify that you can no longer log into the host remotely as root, confirming that the change took effect.

You may also hit a situation where, when logging into the host remotely, there is a long wait after entering the username before you can enter the password. This happens because CentOS's SSH service performs name resolution by default, so if name resolution works properly in the host's network environment the problem will not occur. If you do run into it, check the reverse DNS record for this host; if the network the host sits in has no reverse name-resolution service at all, you can disable SSH's use of name resolution to fix the problem (UseDNS yes -> no).

Finally, SSH listens on port 22 by default; for security you can also change the default listen port, for example to 22168.
# vi /etc/ssh/sshd_config
#PermitRootLogin yes   //default
PermitRootLogin no    //after the change: root SSH login blocked
#UseDNS yes            //default: DNS name resolution enabled
UseDNS no             //after the change
#Port 22               //default SSH listen port
Port 22168            //after the change
# service sshd reload
Reloading sshd:                           [  OK  ]

Figure: Editing the SSH configuration and reloading the service

After reloading the SSH service, use the "netstat -tunpl" command to confirm that the sshd service now listens on port 22168.
# netstat -tunpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
tcp        0      0 0.0.0.0:22168              0.0.0.0:*                   LISTEN      3019/sshd
tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      1116/master
tcp        0      0 :::22168                    :::*                        LISTEN      3019/sshd
tcp        0      0 ::1:25                      :::*                        LISTEN      1116/master

Figure: Confirming the SSH service listen port change

At this point, remember to update the IPTables firewall rules, changing the rule that allows SSH on port 22 to allow port 22168 instead. For working with IPTables firewall rules, see the article CentOS 6.8 Basic Setup (10) - Adjusting IPTables Firewall Rules.
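For completeness, a hedged sketch of the matching rule in /etc/sysconfig/iptables (the article referenced above covers this properly):
# vi /etc/sysconfig/iptables
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22168 -j ACCEPT   //replace the old dpt:22 rule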
# service iptables reload
iptables: Trying to reload firewall rules:                 [  OK  ]
# service iptables status
Table: filter
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination
1    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED
2    ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0
3    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
4    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:22168
5    REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
num  target     prot opt source               destination
1    REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination

Figure: Adjusting the IPTables firewall rules

Build 2017 - Satya Nadella on the cloud ecosystem


The Microsoft Build 2017 conference has kicked off in Seattle. It is Microsoft's biggest annual event aimed at developers, although, given announcements of novelties covering nearly every aspect of the Redmond giant's business, it is followed with great interest by other specialists (and not only) from the IT world. After a video spot presenting the city hosting the conference attendees, ending with a caption that can be read two ways, "Welcome to Cloud City", the conference was opened by Satya Nadella, CEO of Microsoft.

Dell EMC World 2017 – Dell Technologies Cloud Strategy Session Notes

Disclaimer: I recently attended Dell EMC World 2017. My flights, accommodation and conference pass were paid for by Dell EMC via the Dell EMC Elect program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don't form part of my blog posts, but could influence future discussions.

I attended a media and influencer session covering Dell EMC’s Cloud Strategy. Unfortunately I wasn’t able to stay for the entire session, but I thought these notes might be useful for folks out there interested in Dell EMC’s approach to this somewhat hot topic.

 

Cloud Strategy Overview

Jeremy Burton comes on stage. "I'm very much the warmup act". Talks briefly about Dell's application-centric view of the world:

  • Mission-critical applications – IO intensive, requiring guarantees of resiliency
  • General-purpose applications
  • Cloud-native applications

It's a short rehash of David Goulden's deck from yesterday – you can find my coverage of that here.

 

Cloud Strategy Discussion

There's then a panel, moderated by Matt Baker (Senior Vice President, Strategy and Planning, Dell EMC), and comprised of:

  • David Goulden (President, Dell EMC)
  • Pat Gelsinger (CEO, VMware)
  • Rodney Rogers (CEO, Virtustream)
  • James Watters (Senior VP Product, Pivotal)

 

What have we learned from customers over the past decade?

DG: "Cloud isn't a place, it's where you're doing things. ITaaS – that's the simple definition." Then the whole IT landscape is moving to a cloud operating model. Always have to marry it back to customers' applications. You need to have applications that enhance the business. Take customers on the journey.

PG: The pieces are really coming together. Virtualised compute, SDDC, Cloud Foundation putting the pieces together. Converged and HCI. Validated designs and EHC. We have all the layers at a component level through to a complete integrated solution for private cloud that can be extended to the public cloud. University example where they're operating 50/50 public and private. Announced integration of vRealize with virtustream. It's now all realisable.
Developers are ultimately the end consumers of cloud.

JW: New set of workloads are happening. E.g. Bosch. IoT? NSX helped them, as did vSphere, EHC and PCF.
Rodney, you created a public cloud for demanding environments? How?

RR: Determined that the last thing the world needed was another sub-scale AWS. Solved a different engineering problem. Modernising the applications will happen eventually. No reason you can't use automation and true cloud multi-tenancy for these applications. Break resources into highly granular components. Run higher utilisation per host, allowing pricing power of a public cloud. Still use throughput control providing latency guarantees.

 

What about the notion of locality? Flexible consumption options? What's its role?

DG: Logical extension of cloud operating model. IT should be able to buy its infrastructure based on use too. Traditional models of acquisition and ownership are being challenged.

PG: NSX, heavily favoured subscription model. Huge bias that on-premises is perpetual. That is changing.
NSX is a perfect match for solving stumbling blocks to cloud adoption. Can I talk for an hour about this? We've seen an inflection point. Ability to move networking functions into software. Integral part of what they're doing.

JW: NSX integration is important in PCF. Figuring out this stuff is hard. NSX-T being integrated with PCF. Vital to application / platform approach.

PG: Cloud has been a leader in modern application development aspects.

JW: We thought we wanted cloud, what we really wanted was cloud applications.

 

What about the notion of community clouds? What role do verticals play in what you're doing at virtustream?

RR: Mission-critical applications in cloud is a relatively new space. The moniker is associated with SAP, but we run 1000s of applications. Starting to "verticalise" – introducing a healthcare vertical. "Hybrid-washing is the new cloud-washing". Mission critical in the cloud will be a $25-30B segment. Federal, public sector …

 

[Questions from the floor]

Where does DevOps fit in?

JW: Continuous ops, deployment, update. Took an app-centric view of developers infrastructure.

PG: Developers wouldn't go to a devops conference. They're not motivated to be operations people. They want to automate stuff to get out of the business of ops. PCF is built for Day 2 operations. Developer to Operations is measured in minutes. Home Depot, Comcast operating at scale.

 

What are you thinking about public clouds? You're competing and working with them? Where do they really fit?

PG: Lead partners for Pivotal are AWS, IBM, Azure and Google. Cross-cloud strategy is helping them embrace the energy. The right answer is a hybrid, multi-cloud strategy moving forward. Are you going to bet on one cloud only?

 

Cost dynamics of that approach? Want to reduce the friction across those environments.

DG: Dell EMC will be the provider of the infrastructure. Be the company that solutions across Dell Technologies (e.g. EHC). On-premises marketplace is a huge opportunity. As much cooperative as it is competitive.

PG: No one else has these relationships with the big providers.

RR: We get this a lot. It's all about the use case. There are certain workloads that make the most sense to be placed in a hyperscale public cloud.

 

If I'm a developer for a new application today, I care about APIs and data, not infrastructure. We're still playing with old technology – VMs / containers. Nothing like serverless has emerged in the private space.

JW: The PCF paradigm – here's the code I want to run, the services I want to bind it to. Bringing Spring Cloud Function to the Pivotal stack.

PG: Our objective is to make infrastructure frictionless, regardless of location.
David was talking about differences between SMB and Enterprise. Given the differences, how are you guys approaching SMB from a cloud perspective? And how are they using it?

PG: Some SMB customers think they're just not at scale, so they don't want to run infrastructure. Powerful GTM for Dell. Certain industries, it's just not going to happen. Not one size fits all.

RR: Market segmentation will play significantly here.

 

People can be irrational. Enterprises are looking for a straightforward solution. What you're collectively proposing may seem complex. What do you think?

PG: Initial uptake of public cloud was based on easy, not cost. We've now finished the easy stack in private. What's my business model? Industry constraints? Cost options? Now you can pick the best of both.

RR: If you're going to have a holistic solution, it is a challenge to simplify the message. Large business is where the technical idealist goes to die.

DG: It's not a terribly complicated matrix. You might just use Enterprise Hybrid Cloud and public cloud. In each of these segments, there are complete solutions.

MB: It's been much simplified, and we want to show you as an audience the complete solution. Partners and sales people make the choice a bit simpler when working with customers. Far simpler than it has been in the past.

 

Where do you think Boomi fits in this?

JW: PCF integration just announced.

DG: Haven't met a customer who doesn't need it.

RR: Working with Boomi to integrate with virtustream blueprint technologies and integrating into the virtustream platform.

And that's a wrap. I unfortunately missed the customer discussion between Tom Roloff (Senior Vice President, Business & IT Transformation) and Ted Newman (Head of Cloud Services, Royal Bank of Scotland) and the "Cloud Strategy Realized" panel with:

  • Barb Robidoux (Senior Vice President, Dell EMC Services marketing)
  • Chad Sakac (President, Converged Platforms, Dell EMC)
  • Ashley Gorakhpurwalla (President, Server Solutions Division, Dell EMC)
  • Steve Lalla (Senior Vice President, Commercial Client Software and Solutions, Dell)
  • Ray O'Farrell (CTO and Chief Development Officer, VMware)

 

Conclusion

Everyone says cloud (of whatever type) is hard. And they're right. A few people made a big point about the focus on private cloud by Dell EMC in one of the keynotes this week. I think they're missing the point though. Amazon will always tell you that public is best. And probably by 2025, when most applications are cloud-native, this will absolutely be true. But in the meantime, there are a shedload of enterprises and small businesses running legacy applications that don't necessarily translate well to public cloud infrastructure. Or they can be serviced more efficiently in a private cloud scenario. I don't have a problem with this approach at all. Dell EMC aren't stupid. They have virtustream, and they're working mighty hard to make sure their hybrid story is a good one. People get this idea that vendors have to be everything to everyone, and when that doesn't happen they seem to get a bit upset. Public cloud is clearly a solid way forward for a lot of companies and their applications, but it's not the only one. Just like not everyone is going to be a hyperscaler, not everyone is going to go all in on public cloud. I'm okay with that. And Dell EMC may change their mind next year too. If you're looking for an alternative viewpoint, you could have a look at this article on El Reg. In any case, the part of the session I attended was informative. 4 stars.

Zimbra: Installing Zimbra 8.7.x with a single command, including Chat and Drive


Greetings friends, today I bring you a short but very powerful post. I have already shown you many times how to install Zimbra Collaboration manually:

  • Zimbra: Installing Zimbra 8.7.6 on Ubuntu 14.04 – with Chat and Drive!
  • Zimbra: Installing Zimbra 8.7.1 on Ubuntu 16.04 LTS

But today I bring you, once again, the script that installs Zimbra in a simple, automated way: ZimbraEasyInstall, updated for Zimbra Collaboration 8.7.7, which includes Chat and Drive. This means that in less than 5 minutes you can have all the power of Zimbra for email, with Chat and with Nextcloud/Owncloud integration. Incredible!

Hardware and Software Requirements

To install Zimbra, the recommended hardware resources are:

  • CPU – Intel/AMD 64-bit CPU 1.5 GHz
  • RAM – 8GB of RAM
  • HDD – 5 GB of space for software and logs; to size the disk space Zimbra needs, use the calculator here – https://jorgedelacruz.es/zimbraform/
  • RAID-5 is not recommended for installations with more than 100 accounts

As for software, ZimbraEasyInstall is designed for Ubuntu 14.04 LTS or Ubuntu 16.04 LTS; pick whichever you like best.

Installing Zimbra Collaboration 8.7.7 with Drive and Chat in a single click

If we already have the Linux server installed, it is time to launch ZimbraEasyInstall. Here are the simple steps; to install correctly we need to know a few basic pieces of information:

  • The domain we want Zimbra to use. It can be internal, for example dominio.local, but I recommend using a real domain, so that the users created are, for example, usuario@dominio.es
  • The server's local IP, which can be found with ifconfig
  • The password for the admin user and other parts of Zimbra

wget https://github.com/jorgedlcruz/zimbra-automated-installation/raw/master/ZimbraEasyInstall-87
chmod +x ZimbraEasyInstall-87
./ZimbraEasyInstall-87 dominio.es 192.168.100.100 Zimbra2017

Once the process has finished correctly, after a few minutes we will see something like the following:

You can access now to your Zimbra Collaboration Server
Admin Console: https://IPSERVER:7071
Web Client: https://IPSERVER

And if we go to the Web Client, we will see that everything went well and we already have Zimbra Collaboration, Chat and Drive, with no complicated installation steps.

Links of interest

I would like to leave you a few links here that I think may be of interest to you:

  • https://github.com/jorgedlcruz/zimbra-automated-installation
  • Zimbra: Amazon Lightsail to install Zimbra Collaboration 8.7.1, Part I
  • Zimbra: Amazon Lightsail to install Zimbra Collaboration 8.7.1, Part II
  • Zimbra: Installing Zimbra Collaboration in the Cloud (DigitalOcean)

Thank you very much for reading

The post Zimbra: Installing Zimbra 8.7.x with a single command, including Chat and Drive appeared first on El Blog de Jorge de la Cruz.

Nutanix 5.0.2 is here

Today I am going to talk again about the wonderful NUTANIX technology, which I have already covered in earlier posts. In this post we will look specifically at the most important features of the latest NUTANIX 5.0.2 release. I will also take the opportunity to go over best practices for performing a NUTANIX version upgrade, so you always get the best results from the process.

CentOS 6.8 Basic Setup (7) - A Brief Look at the YUM Package Manager


Preface

Work has recently brought me back to CentOS. The lab environment for this series uses the CentOS-6.8-x86_64-minimal.iso image, i.e. the CentOS 6.8 Minimal Install. Why not the latest CentOS 6.9? Because the latest LIS 4.1.3-2 only supports up to CentOS 6.8, so CentOS 6.8 is where I pick things up again. Let's get started.



Lab Environment

  • Windows Server 2016 Hyper-V
  • CentOS 6.8 x86-64 (Kernel version 2.6.32-642.el6)



The YUM Package Manager

The vast majority of open source software is distributed as tarballs, and on Linux, to avoid the tedious tarball steps of unpacking, configuring (./configure), compiling (make) and installing (make install), RPM (The RPM Package Manager) was developed to simplify the whole installation flow. RPM simplifies installation but cannot resolve package dependencies and conflicts. For example, installing RPM A may require RPM B first (a dependency); downloading and installing RPM B may in turn require RPM C (another dependency); and when you have finally downloaded and installed RPM C, it may turn out to conflict with RPM A. Traditionally you could only untangle such package conflicts by hand.

The YUM (Yellowdog Updater, Modified) package manager was developed precisely to solve RPM's dependency and conflict problems. It automatically downloads the appropriate RPM packages from a designated package server and installs them, automatically fetching and installing related RPMs when dependencies arise, while avoiding package conflicts as far as possible. YUM greatly simplifies the software installation flow and takes the pain out of dependencies and conflicts, making installing, removing and upgrading packages very easy.

By default, YUM downloads packages from servers abroad. By editing the YUM configuration file /etc/yum.repos.d/CentOS-Base.repo we can point package downloads at mirror sites hosted by universities and institutions in Taiwan. There are currently about 11 usable mirrors in Taiwan (listed below); choose the one that best fits your network conditions to speed up downloads, or consult the CentOS mirror list for a mirror in your own country:

  • Shu-Te University: http://ftp.stu.edu.tw/Linux/CentOS/
  • Yuan Ze University: http://ftp.yzu.edu.tw/Linux/CentOS/
  • I-Shou University: http://ftp.isu.edu.tw/pub/Linux/CentOS/
  • Kun Shan University: http://ftp.ksu.edu.tw/pub/CentOS/
  • National Center for High-performance Computing: http://ftp.twaren.net/Linux/CentOS/
  • Southern Taiwan University of Science and Technology: http://ftp.stust.edu.tw/pub/Linux/CentOS/
  • Taichung City Education Bureau: http://ftp.tc.edu.tw/Linux/CentOS/
  • Providence University: http://ftp.cs.pu.edu.tw/Linux/CentOS/
  • National Sun Yat-sen University: http://ftp.nsysu.edu.tw/CentOS/
  • Hinet IDC: http://mirror01.idc.hinet.net/CentOS/
  • National Chiao Tung University: http://centos.cs.nctu.edu.tw/


In the steps below, we change the mirror site in the YUM configuration file from the default overseas site to the domestic Hinet IDC:
# cd /etc/yum.repos.d/
# cp CentOS-Base.repo CentOS-Base.repo.bak
# sed -i 's,mirror.centos.org/centos,mirror01.idc.hinet.net/CentOS,g' CentOS-Base.repo


With the configuration above in place you can start managing packages with YUM and the commands below, but before you begin, make sure the CentOS host's clock is correct, to avoid unpredictable errors later caused by too large a time difference between the local system and the YUM mirror. The following lists the commands and options you will use most often with YUM (a short worked sequence follows the list):

  • yum check-update: check for package updates; compares the packages installed on the system against the YUM mirror and lists the packages that need updating.
  • yum update: update packages; after checking and listing the packages that need updating, it asks whether to proceed. Add the -y option to answer yes to every prompt and allow all updates.
  • yum install <package>: install a package; downloads the specified package from the YUM mirror and installs it. After gathering the relevant information it asks whether you are sure you want to install; add -y to answer yes and install the package along with its dependencies.
  • yum remove <package>: remove a package; removes the package you name. After gathering the relevant information it asks whether you are sure you want to remove it; add -y to answer yes and remove the package along with its dependent packages.
  • yum clean all: clear cached data; removes the temporary files created when YUM downloads RPMs for installation.
  • yum search <package name or keyword>: search for a package, using a package name you already know or a keyword related to it.
  • yum list: show installable packages; lists every package name available from the configured YUM mirror.
  • yum info <package>: show package details for the package you name, such as supported platform, version, size, description, license and project website.
  • yum grouplist: show the installable package groups available from the configured YUM mirror.
  • yum groupinstall <package group>: install a package group; downloads and installs the packages in the named group from the YUM mirror. After gathering the group's information it asks whether you are sure; add -y to answer yes and install the packages along with their dependencies.
  • yum groupremove <package group>: remove a package group; after gathering the relevant information, it asks whether you are sure you want to remove every package in the group; add -y to answer yes.
  • yum groupinfo <package group>: query a package group; shows the named group's information and description, along with its Default Packages, Mandatory Packages and Optional Packages lists.
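As promised above, a short worked sequence tying these together (a minimal sketch; vim is just an example package):
# yum clean all        //drop cached metadata from the old mirror
# yum check-update     //compare installed packages against the mirror
# yum -y update        //apply all updates non-interactively
# yum -y install vim   //install a package plus its dependencies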


Since the YUM package manager is ultimately managing RPM packages for us (installation, removal and upgrades still happen through RPM underneath), we can still use the rpm command to inspect and manage packages. For example, use rpm to find the installed IPTables package, its configuration files, and its service start script.
# rpm -qa iptables  //query the installed package version
iptables-1.4.7-16.el6.x86_64
# rpm -qc iptables  //list the package's configuration files
/etc/sysconfig/iptables-config
# rpm -ql iptables  //list every file in the package
/bin/iptables-xml-1.4.7
/etc/rc.d/init.d/iptables
/etc/sysconfig/iptables-config
/lib64/libip4tc.so.0-1.4.7
/lib64/libip4tc.so.0.0.0-1.4.7
...(output truncated)...

Figure: Querying with the rpm command