Managing Your Big Data Platform with Ambari

February 4th, 2016 by free2way

Hortonworks is one of the better-known companies in the big data field, and its Ambari is an Apache open-source project. As an enterprise-grade big data management platform it is quite convenient for enterprise deployments; in my view it is in no way inferior to the CDH line.

Installing Ambari is very simple:

wget http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.2.0.0/ambari.repo

yum list|grep ambari

yum install ambari-server

ambari-server setup

and the installation is done. After completing the initial configuration with ambari-server setup, start the service with ambari-server start.
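A minimal sketch of starting the server and checking on it, assuming the stock ambari-server CLI and its default log location:

ambari-server start
ambari-server status
tail -f /var/log/ambari-server/ambari-server.log   # watch here if startup fails

You can then log in to the web UI at http://<ambari-host>:8080 (default credentials admin/admin).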


A Simple Hadoop 2.6 Single-Node Pseudo-Distributed Deployment

May 30th, 2015 by free2way

Deploying Hadoop in single-node pseudo-distributed mode is fairly simple, and it works well as a lightweight reference environment inside a virtual machine:
1. Get the Hadoop 2.6 tarball and unpack it on Linux.
2. Edit /etc/profile:

export JAVA_HOME=/usr/java/jdk1.7.0_79
export HADOOP_PREFIX=/hadoop/hadoop-2.6.0
export CLASSPATH=".:$JAVA_HOME/lib:$CLASSPATH"
export PATH="$JAVA_HOME/bin:$HADOOP_PREFIX/bin:$PATH"
export HADOOP_PREFIX PATH CLASSPATH
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
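
After editing, a quick sanity check that the variables took effect (a sketch; hadoop version simply confirms the binaries are on the PATH):

source /etc/profile
hadoop version   # should report Hadoop 2.6.0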

3. Edit the core configuration files (hadoop.tmp.dir and fs.default.name go in core-site.xml; mapred.job.tracker goes in mapred-site.xml):

<property>
  <name>hadoop.tmp.dir</name>
  <value>/hadoop/tmp</value>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://lsn-linux:9000</value>
</property>

<property>
  <name>mapred.job.tracker</name>
  <value>192.168.82.101:9001</value>
</property>

4. Format the NameNode: hdfs namenode -format
5. Start DFS and YARN, as sketched below.
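
A minimal sketch of the start-up commands, assuming the stock sbin scripts shipped with Hadoop 2.6 under the HADOOP_PREFIX set above:

$HADOOP_PREFIX/sbin/start-dfs.sh
$HADOOP_PREFIX/sbin/start-yarn.sh
jps   # expect NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager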
6. Verify:

# hdfs dfsadmin -report
Configured Capacity: 28013621248 (26.09 GB)
Present Capacity: 21471326208 (20.00 GB)
DFS Remaining: 21471301632 (20.00 GB)
DFS Used: 24576 (24 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (1):

Name: 192.168.82.101:50010 (hadoop)
Hostname: hadoop
Decommission Status : Normal
Configured Capacity: 28013621248 (26.09 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 6542295040 (6.09 GB)
DFS Remaining: 21471301632 (20.00 GB)
DFS Used%: 0.00%
DFS Remaining%: 76.65%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat May 30 23:06:53 CST 2015

7. Browse to http://host:8088 for the YARN ResourceManager UI (the NameNode UI is on port 50070).

A Puppet Cert Error

May 2nd, 2015 by free2way

PSSH had been working fine for me, but to support heterogeneous platforms I decided to migrate gradually to Puppet. The tool does have a learning curve, and right out of the gate I seemed to hit a cert problem:

When a newly added agent node ran puppet agent --test, it printed the following:

root@client:~# puppet agent --test
Info: csr_attributes file loading from /etc/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for client.puppet.local
Info: Certificate Request fingerprint (SHA256): E2:7A:54:96:4F:29:83:44:1C:79:B8:F8:16:11:67:28:67:F2:63:C6:84:1D:88:2B:65:18:E5:DA:DF:57:37:E9
Exiting; no certificate found and waitforcert is disabled

Exiting; no certificate found and waitforcert is disabled. Strange. Hostname resolution for the master checked out, puppet cert list on the master showed the new node's certificate, and NTP was fine too. It turned out that this machine's hostname had not originally been a full DNS name and had been renamed at some point.
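For this kind of mismatch, comparing the certname the agent actually uses against the node's FQDN can save time; a quick check (option names per Puppet 3.x, worth verifying against your version):

puppet agent --configprint certname
facter fqdn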

The only fix was to regenerate the certificate:

root@master:~# puppet cert clean client.puppet.local

root@client:/home# find /var/lib/puppet/ssl -name client.puppet.local.pem -delete

root@master:~# puppet cert sign client.puppet.local

root@client:/home# puppet agent -t
Info: Caching certificate for client.puppet.local
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for client.puppet.local
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for client.puppet.local
Info: Applying configuration version '1430578050'
Info: Creating state file /var/lib/puppet/state/state.yaml
Notice: Finished catalog run in 0.03 seconds

 
Problem solved.

The Difference Between KVM and QEMU

January 20th, 2015 by free2way

Here is a repost of an article by Guillermo Alvarado on the difference between KVM and QEMU; it is concise and explains things clearly.

UNDER THE HOOD, KVM AND QEMU WORKING TOGETHER

There are a lot of questions about how KVM and QEMU work. What is the difference between them? How do they work together? How do these technologies combine to deliver a powerful virtualization experience? Read on.

KVM

KVM (Kernel-based Virtual Machine) is a virtualization solution for Linux on x86 hardware with virtualization extensions (Intel VT or AMD-V). The kernel component of KVM has been included in mainline Linux since 2.6.20. KVM is open source software and has played a key role in the open-source, Linux-based virtualization ecosystem.

QEMU

QEMU (Quick EMUlator) is a generic, open-source machine emulator and virtualizer. It is a free and open-source hosted hypervisor that performs hardware virtualization. QEMU emulates central processing units through dynamic binary translation and provides a set of device models, enabling it to run a variety of unmodified guest operating systems.

Working together

On real hardware, the operating system translates programs into instructions that are executed by the physical CPU. In a virtual machine the process is the same, except that the virtual CPU is provided by the hypervisor, which must translate the virtual CPU's instructions into instructions for the physical CPU. This translation carries a big performance overhead.

To minimize this overhead, processors support virtualization extensions: Intel's technology is called VT-x and the AMD equivalent is AMD-V. Using these, a slice of the physical CPU can be mapped directly to the virtual CPU, so the virtual CPU's instructions can be executed directly on that physical CPU slice.

KVM is the Linux kernel module that enables this mapping of physical CPU to virtual CPU. The mapping provides hardware acceleration for the virtual machine and boosts its performance; in fact, QEMU uses this acceleration when the virt type is set to KVM. When they work together, KVM accesses the CPU and memory directly, while QEMU emulates the hardware resources such as hard disk, video, USB, and so on.

If your server's CPU does not support virtualization extensions, you will not be able to use KVM, and it becomes the hypervisor's job to execute the virtual CPU instructions through translation. QEMU uses TCG, the Tiny Code Generator, to translate and execute the virtual CPU instructions on the physical CPU as efficiently as it can, but you will still pay a performance overhead.
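
The division of labor is easy to see from the command line; a small sketch (guest.img is a placeholder disk image):

egrep -c '(vmx|svm)' /proc/cpuinfo   # vmx = Intel VT-x, svm = AMD-V; non-zero means KVM is usable
qemu-system-x86_64 -enable-kvm -m 1024 -hda guest.img   # -enable-kvm delegates CPU/memory to KVM, QEMU only emulates devices; without it, QEMU falls back to TCG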

Conclusion

KVM needs QEMU (the emulator) for full hypervisor functionality. QEMU is self-sufficient, while KVM is really a Linux kernel module that exploits the VT extensions to act as a driver for the physical CPU's virtualization capabilities.

So we can say that QEMU benefits from KVM's performance boost, and KVM by itself cannot provide a complete virtualization solution; it needs QEMU.

Managing Docker with Shipyard

September 2nd, 2014 by free2way

Shipyard provides a web UI for managing your Docker hosts.

Install

On a machine that already has Docker deployed, install Shipyard directly:

root@kvm:~# docker run -i -t -v /var/run/docker.sock:/docker.sock shipyard/deploy setup
.........................
5b18c104c09f: Download complete
d7b96d16df63: Download complete
cd1bd7a9e8c2: Download complete
eba9b5f1d1d1: Download complete
08852c160ec2: Download complete
2cbf6e5024d8: Download complete
Shipyard Stack Deployed

You should be able to login with admin:shipyard at http://<docker-host-ip>:8000

Browse to http://<docker-host-ip>:8000 to reach the login screen.

After additionally registering the Shipyard agent on the host, you can manage the Docker containers running there, as sketched below.
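From memory of the v1 agent, registration is itself just a container; a sketch, where the IP/URL environment variables and port 4500 are assumptions to check against the Shipyard documentation for your version:

docker run -i -t -v /var/run/docker.sock:/docker.sock \
  -e IP=<this-host-ip> \
  -e URL=http://<docker-host-ip>:8000 \
  -p 4500:4500 shipyard/agent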


Building a CloudStack 4.4 Lab Environment in VMware Workstation

August 31st, 2014 by free2way

Environment: two VMware Workstation VMs on a laptop, each with a single NIC.
Installation steps
Install CentOS 6.5 on both VMs.
Add the official CloudStack package repository. The yum repo definition for CloudStack 4.4 is as follows:

[cloudstack]
name=cloudstack
baseurl=http://cloudstack.apt-get.eu/rhel/4.4/
enabled=1
gpgcheck=0

Then install ntp, cloudstack-management, and the MySQL database server:

# yum update
# yum install ntp
# yum install cloudstack-management
# yum install mysql-server

Adjust the SELinux settings and open the MySQL and NFS ports in the firewall. In a test environment you can simply disable the security policies altogether, as sketched below.
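A minimal sketch of disabling both on CentOS 6 (test environments only):

setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
service iptables stop
chkconfig iptables off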

Initialize the CloudStack database with cloudstack-setup-databases, and when that completes run cloudstack-setup-management:

cloudstack-setup-databases cloud:cloud@localhost --deploy-as=root: -i 192.168.82.18
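
The management-server setup that follows is a single command with no required arguments:

cloudstack-setup-management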

Configure NFS:

[root@cloud1 yum.repos.d]# cat /etc/exports
/export *(rw,async,no_root_squash,no_subtree_check)

mount -t nfs 192.168.82.18:/export/secondary /mnt/secondary
mount -t nfs 192.168.82.18:/export/primary /mnt/primary
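
For completeness, the service side on CentOS 6 looks roughly like this (assuming the primary and secondary subdirectories still need to be created under /export):

mkdir -p /export/primary /export/secondary
service rpcbind start
service nfs start
exportfs -a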

Download the system VM template:

./cloud-install-sys-tmplt \
  -m /mnt/secondary \
  -u http://cloudstack.apt-get.eu/systemvm/4.4/systemvm64template-4.4.0-6-kvm.qcow2.bz2 \
  -h kvm \
  -s \
  -F

Install the CloudStack agent on the compute node, again via yum, as shown below.
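A sketch, assuming the same CloudStack 4.4 repo has been configured on the compute node:

yum install cloudstack-agent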

Finally, browse to http://192.168.82.18:8080/client. The default username and password are admin/password.


A First Docker Installation

August 28th, 2014 by free2way

Installing Docker is fairly straightforward:

Install the latest Ubuntu-packaged version:

$ sudo apt-get update
$ sudo apt-get install docker.io
$ sudo ln -sf /usr/bin/docker.io /usr/local/bin/docker
$ sudo sed -i '$acomplete -F _docker docker' /etc/bash_completion.d/docker.io

Add the Docker repository key to your local keyring:

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9

Add the Docker repository to your apt sources, then update and install the lxc-docker package:

$ sudo sh -c "echo deb https://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update
$ sudo apt-get install lxc-docker

Confirm that it works:

$ sudo docker run -i -t ubuntu /bin/bash

The result looks like this:

kvm@kvm:~$ sudo docker run -i -t ubuntu /bin/bash
Unable to find image 'ubuntu' locally
Pulling repository ubuntu


c4ff7513909d: Download complete
511136ea3c5a: Download complete
1c9383292a8f: Download complete
9942dd43ff21: Download complete
d92c3c92fa73: Download complete
0ea0d582fd90: Download complete
cc58e55aa5a5: Download complete

Run a hello world:

kvm@kvm:~$ sudo docker run -i -t ubuntu echo hello world
hello world

Installing Open vSwitch on CentOS 6.5

August 19th, 2014 by free2way

1. Make sure the local autoconf version is 2.64 or later:

tar xvf autoconf-2.64.tar.gz

cd autoconf-2.64/

./configure
make
make install

2. Install Open vSwitch:

wget http://openvswitch.org/releases/openvswitch-2.3.0.tar.gz

tar zxvf openvswitch-2.3.0.tar.gz
cd openvswitch-2.3.0/

./boot.sh;./configure; make dist

mkdir /root/rpmbuild/SOURCES

cp openvswitch*.tar.gz /root/rpmbuild/SOURCES/

rpmbuild -bb rhel/openvswitch.spec

rpmbuild -bb rhel/openvswitch-kmod-rhel6.spec

If the kmod build fails with an "Installed (but unpackaged) file(s) found" error for /etc/depmod.d/openvswitch.conf, edit rhel/openvswitch-kmod-rhel6.spec and add:

%files
%defattr(-,root,root,-)
/etc/depmod.d/openvswitch.conf

rpm -ivh /root/rpmbuild/RPMS/x86_64/*.rpm

[root@cloudy x86_64]# /etc/init.d/openvswitch start

/etc/openvswitch/conf.db does not exist … (warning).

Creating empty database /etc/openvswitch/conf.db [  OK  ]

Starting ovsdb-server [  OK  ]

Configuring Open vSwitch system IDs [  OK  ]

Inserting openvswitch module [  OK  ]

Starting ovs-vswitchd [  OK  ]

Enabling remote OVSDB managers [  OK  ]

Installation complete!
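
A quick smoke test to confirm the daemons respond; br0 is an arbitrary bridge name used only for the check:

ovs-vsctl add-br br0
ovs-vsctl show
ovs-vsctl del-br br0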

Building a DB2 pureScale Platform with VMware

August 3rd, 2014 by free2way

I have spent some time on this recently. pureScale is not actually used much in China; I have only heard of Bank of Communications and one tobacco company running it. At first glance the DB2 pureScale platform looks similar in architecture to Oracle RAC, but a closer look shows it still carries over plenty of technology from the DB2 for z/OS platform. In today's world dominated by x86 and distributed architectures, this can be read as IBM's concession to the mainstream: the z/OS install base is concentrated in banking, insurance, and a little high-end manufacturing, while DB2 UDB on the Power platform keeps losing database market share. All in all, IBM has been slow to turn around.

Back to pureScale itself: the OS can be Linux or AIX, and in my environment I used SUSE Linux 11 SP3. For shared storage it uses IBM's own clustered file system, GPFS, and for high availability it uses TSA (which, frankly, I have never gotten used to). For the test environment I created four VMware virtual machines: one exposes iSCSI devices shared to the other three nodes, and of those three, two are members and one is the CF node. The DB2 version is 10.5; I may get a chance to test BLU on it.


A Comparison of OpenStack Services and AWS Services

June 15th, 2014 by free2way
