3D ("dee" in Thai means very good, excellent).
First, look at the world with a positive attitude.
Whatever happens, don't rush to complain first.
Second, do your own part of the work well, so you don't hold other people up.
Third, take good care of your health.
Tools needed to build Ambari
The script is as follows:
peicheng/ambari_build_env · GitHub
https://github.com/peicheng/ambari_build_env
#------------------------------
#!/bin/bash
# install JDK 1.6 first
# Apache Maven
wget http://mirror.cc.columbia.edu/pub/software/apache/maven/maven-3/3.0.4/binaries/apache-maven-3.0.4-bin.tar.gz
sudo tar xzf apache-maven-3.0.4-bin.tar.gz -C /usr/local
cd /usr/local
sudo ln -s apache-maven-3.0.4 maven
echo 'export M2_HOME=/usr/local/maven' >> /etc/profile
# single quotes below so ${M2_HOME} and ${PATH} expand when /etc/profile is sourced, not now
echo 'export PATH=${M2_HOME}/bin:${PATH}' >> /etc/profile
source /etc/profile
# setuptools
cd ~
wget http://pypi.python.org/packages/2.6/s/setuptools/setuptools-0.6c11-py2.6.egg#md5=bfa92100bd772d5a213eedd356d64086
sh setuptools-0.6c11-py2.6.egg
# node.js and brunch
yum install -y openssl-devel gcc-c++ gcc
cd /usr/local/src/
wget -N http://nodejs.org/dist/node-latest.tar.gz
tar xzvf node-latest.tar.gz
cd node-*
./configure   # node builds from source with configure && make && make install
make
make install
npm install -g brunch@1.5.3
#------------------------------
git clone git://git.apache.org/ambari.git
cd ambari
#RHEL/CentOS 6:
mvn -X -B -e clean install package rpm:rpm -DskipTests -Dpython.ver="python >= 2.6"
Ambari Server RPM will be created under AMBARI_DIR/ambari-server/target/rpm/ambari-server/RPMS/noarch.
Ambari Agent RPM will be created under AMBARI_DIR/ambari-agent/target/rpm/ambari-agent/RPMS/x86_64.
NOTE: Run everything above as root.
Cf.
Ambari Development - Apache Ambari (Incubating) - Apache Software Foundation
https://cwiki.apache.org/confluence/display/AMBARI/Ambari+Development
Recently I have been looking at the Ambari REST API, the layer that interacts with the Ambari server.
Much of the design builds on features that the underlying framework already provides.
In the REST API layer, JAX-RS is used heavily to cut out a lot of otherwise tedious plumbing.
/**
 * Handles POST /clusters/{clusterID}/hosts/{hostID}
 * Create a specific host.
 *
 * @param body     http body
 * @param headers  http headers
 * @param ui       uri info
 * @param hostName host id
 *
 * @return host resource representation
 */
@POST
@Path("{hostName}")
@Produces("text/plain")
public Response createHost(String body, @Context HttpHeaders headers, @Context UriInfo ui,
                           @PathParam("hostName") String hostName) {
  return handleRequest(headers, body, ui, Request.Type.POST,
      createHostResource(m_clusterName, hostName, ui));
}
The signature is Response createHost(String body, HttpHeaders headers, UriInfo ui, String hostName), and it handles POST /clusters/{clusterID}/hosts/{hostID} to create a specific host.
To put it in django or web.py terms, this mechanism lets you pull the REST parameters out of the URI and use them as ordinary variables inside the program.
The createHost method above is an example of this.
(There is also an outer wrapper that extracts the clusterID; ignore it for now.)
JAX-RS @PathParam example
http://www.mkyong.com/webservices/jax-rs/jax-rs-pathparam-example/
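To make the @PathParam mechanism concrete, here is a minimal self-contained JAX-RS sketch (a hypothetical resource class, not Ambari code): template segments declared in @Path are bound to method arguments by name.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// Hypothetical resource: GET /clusters/c1/hosts/h1 binds clusterName="c1", hostName="h1".
@Path("/clusters/{clusterName}/hosts")
public class HostsExample {
    @GET
    @Path("{hostName}")
    @Produces("text/plain")
    public String getHost(@PathParam("clusterName") String clusterName,
                          @PathParam("hostName") String hostName) {
        return "cluster=" + clusterName + ", host=" + hostName;
    }
}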
Terminology
Service
Service refers to services in the Hadoop stack. HDFS, HBase, and Pig are
examples of services. A service may have multiple components (e.g., HDFS has
NameNode, Secondary NameNode, DataNode, etc.). A service can just be a client
library (e.g., Pig does not have any daemon services, but just has a client library).
Component
A service consists of one or more components. For example, HDFS has 3
components: NameNode, DataNode and Secondary NameNode. Components may
be optional. A component may span multiple nodes (e.g., DataNode instances on
multiple nodes).
Node/Host
Node refers to a machine in the cluster. Node and host are used interchangeably
in this document.
Node-Component
Node-component refers to an instance of a component on a particular node. For
example, a particular DataNode instance on a particular node is a node-component.
Operation
An operation refers to a set of changes or actions performed on a cluster to satisfy
a user request or to achieve a desirable state change in the cluster. For example,
starting of a service is an operation and running a smoke test is an operation. If a
user requests to add a new service to the cluster and that includes running a smoke
test as well, then the entire set of actions to meet the user request will constitute an
operation. An operation can consist of multiple “actions” that are ordered (see
below).
Task
Task is the unit of work that is sent to a node to execute. A task is the work that
node has to carry out as part of an action. For example, an “action” can consist of
installing a datanode on Node n1 and installing a datanode and a secondary
namenode on Node n2. In this case, the “task” for n1 will be to install a datanode
and the “tasks” for n2 will be to install both a datanode and a secondary namenode.
Stage
A stage refers to a set of tasks that are required to complete an operation and are
independent of each other; all tasks in the same stage can be run across different
nodes in parallel.
Action
An ‘action’ consists of a task or tasks on a machine or a group of machines. Each
action is tracked by an action id and nodes report the status at least at the
granularity of the action. An action can be considered a stage under execution. In
this document a stage and an action have one-to-one correspondence unless
specified otherwise. An action id will be a bijection of request-id, stage-id.
Stage Plan
An operation typically consists of multiple tasks on various machines and they
usually have dependencies requiring them to run in a particular order. Some tasks
are required to complete before others can be scheduled. Therefore, the tasks
required for an operation can be divided in various stages where each stage must be
completed before the next stage, but all the tasks in the same stage can be
scheduled in parallel across different nodes.
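As a toy illustration (not Ambari's actual data model), a stage plan can be modeled as an ordered list of stages, each holding the tasks that may run in parallel:

import java.util.Arrays;
import java.util.List;

public class StagePlanSketch {
    static class Task {
        final String node, work;
        Task(String node, String work) { this.node = node; this.work = work; }
        public String toString() { return node + ":" + work; }
    }

    public static void main(String[] args) {
        // Stage 1 tasks are independent and may run in parallel across nodes;
        // stage 2 is scheduled only after every stage-1 task has completed.
        List<List<Task>> stagePlan = Arrays.asList(
            Arrays.asList(new Task("n1", "install datanode"),
                          new Task("n2", "install datanode"),
                          new Task("n2", "install secondary namenode")),
            Arrays.asList(new Task("n1", "start datanode"),
                          new Task("n2", "start datanode")));
        for (int i = 0; i < stagePlan.size(); i++)
            System.out.println("Stage " + (i + 1) + ": " + stagePlan.get(i));
    }
}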
Manifest
Manifest refers to the definition of a task which is sent to a node for execution. The
manifest must completely define the task and must be serializable. Manifest can also
be persisted on disk for recovery or record.
Role
A role maps to either a component (e.g., NameNode, DataNode) or an action
(e.g., HDFS rebalancing, HBase smoke test, other admin commands, etc.)
[hadoop] Apache Ambari design goals
Design Goals
Platform Independence
The system must architecturally support any hardware and operating system, e.g.
RHEL, SLES, Ubuntu, Windows, etc. Components which are inherently dependent
on a platform (e.g., components dealing with yum, rpm packages, debian packages,
etc.) should be pluggable with well-defined interfaces.
Pluggable Components
The architecture must not assume specific tools and technologies. Any specific
tools and technologies must be encapsulated by pluggable components. The
architecture will focus on the pluggability of Puppet (the provisioning and
configuration tool of choice) and related components, as well as the database used to persist state.
The goal is not to immediately support replacements of Puppet, but the architecture
should be easily extensible to do so in the future.
The pluggability goal doesn’t encompass standardization of inter-component
protocols or interfaces to work with third-party implementations of components.
Version Management & Upgrade
Ambari components running on various nodes must support multiple versions of
the protocols to support independent upgrade of components. Upgrade of any
component of Ambari must not affect the cluster state.
Extensibility
The design should support easy addition of new services, components and APIs.
Extensibility also implies ease in modifying any configuration or provisioning steps
for the Hadoop stack. Also, the possibility of supporting Hadoop stacks other than
HDP needs to be taken into account.
Failure Recovery
The system must be able to recover from any component failure to a consistent
state. The system should try to complete the pending operations after recovery. If
certain errors are unrecoverable, failure should still keep the system in a consistent
state.
Security
Security here implies 1) authentication and role-based authorization of Ambari
users (both API and Web UI), 2) installation, management, and monitoring of the
Hadoop stack secured via Kerberos, and 3) authenticating and encrypting over-the-wire
communication between Ambari components (e.g., Ambari master-agent communication).
Error Trace
The design strives to simplify the process of tracing failures. The failures should
be propagated to the user with sufficient details and pointers for analysis.
Near Real-Time and Intermediate Feedback for Operations
For operations that take a while to complete, the system needs to be able to
provide the user feedback with intermediate progress regarding currently running
tasks, % of operation complete, a reference to an operation log, etc., in a timely
manner (near real-time). In the previous version of Ambari, this was not available
due to Puppet’s Master-Agent architecture and its status reporting mechanism.
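A quick git aside: git pull is just git fetch followed by git merge of the upstream branch, so the two sequences below do the same thing.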
$ git pull
$ git push -u origin master
or
$ git fetch
$ git merge
$ git push -u origin master
import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;

import java.util.HashMap;

// injector is created elsewhere via Guice.createInjector(...) with the server's modules
final Config config1 = injector.getInstance(ConfigFactory.class).createNew(cluster, "t1",
    new HashMap<String, String>() {{
        put("prop1", "val1");
    }});
config1.setVersionTag("1");
config1.persist();
cf.
guice源代码分析(一)injector.getInstance - 编程思索 | Thoughts of Coding
http://tocspblog.appspot.com/?p=54001
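For context, a minimal self-contained Guice sketch (hypothetical Greeter types, not Ambari code) of what injector.getInstance() does: it resolves the requested type against the module's bindings and instantiates the bound implementation.

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;

public class GuiceDemo {
    interface Greeter { String greet(); }
    static class EnglishGreeter implements Greeter {
        public String greet() { return "hello"; }
    }

    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new AbstractModule() {
            @Override protected void configure() {
                // Requests for Greeter are satisfied by EnglishGreeter.
                bind(Greeter.class).to(EnglishGreeter.class);
            }
        });
        // getInstance walks the bindings and instantiates the concrete class.
        Greeter g = injector.getInstance(Greeter.class);
        System.out.println(g.greet()); // prints "hello"
    }
}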
Lately I keep seeing LinkedHashMap used in the form below. It extends HashMap and so inherits its put method.
Methods inherited from class java.util.HashMap: clone, containsKey, entrySet, isEmpty, keySet, put, putAll, remove, size, values
Map<String, Object> properties = new LinkedHashMap<String, Object>();
If you want a hash table with a predictable iteration order, this is the ADT to use; see also TreeMap and HashMap, as contrasted in the sketch below.
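A small demo of the difference in iteration order (standard JDK behavior): LinkedHashMap keeps insertion order, TreeMap sorts by key, and HashMap guarantees no particular order.

import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

public class MapOrderDemo {
    public static void main(String[] args) {
        Map<String, Object> linked = new LinkedHashMap<String, Object>();
        linked.put("b", 2);
        linked.put("a", 1);
        linked.put("c", 3);
        System.out.println(linked);                               // {b=2, a=1, c=3} - insertion order
        System.out.println(new TreeMap<String, Object>(linked));  // {a=1, b=2, c=3} - sorted by key
        System.out.println(new HashMap<String, Object>(linked));  // order is unspecified
    }
}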
cf.
Difference between HashMap, LinkedHashMap and SortedMap in java - Stack Overflow
http://stackoverflow.com/questions/2889777/difference-between-hashmap-linkedhashmap-and-sortedmap-in-java
老紫竹JAVA提高教程(14)-认识Map之LinkedHashMap - 老紫竹的专栏 - 博客频道 - CSDN.NET
http://blog.csdn.net/java2000_net/article/details/3741565
深入Java集合学习系列:LinkedHashMap的实现原理 - 莫等闲 - ITeye技术网站
http://zhangshixi.iteye.com/blog/673789
LinkedHashMap (Java Platform SE 6)
http://docs.oracle.com/javase/6/docs/api/java/util/LinkedHashMap.html
Traveling, Writing, Programming | 外刊IT评论网
http://www.aqee.net/traveling-writing-programming/
To sum up, what I have done so far this year includes:
Let me start a year ago, in September 2010. I had just left a company I co-founded; the experience was valuable, but the endless long hours had left me burned out. I returned to England and needed to think about the future. I had long had a dream of moving to America (just for a few years), so I wrote the following in a Google notebook:
Life choices:
Go to Columbia University in New York for graduate study
Cons - very expensive, not guaranteed to learn anything truly useful, boring?
Pros - it's a university in New York!
Write a book and apply for an O-1 visa
Cons - takes a lot of time, risky
Pros - good for the career, fun
Wait. Take a holiday in New York (3 months). Wait for a startup visa.
Easy - just not that interesting
Maybe option 2, falling back to 3 if it doesn't work out?
In the end I chose option 2. I had been studying JavaScript web applications for a long time and would write a book on the subject, and why not write it while traveling around the world? That was another dream of mine. I bought a round-the-world ticket from oneworld (cheaper than you would think) and decided to leave the following week for my first stop, South Africa.
If you have never been to Africa, you should go once. The scenery is raw and beautiful, and it is hard to describe in words to anyone who has not experienced it. I fell in love with the south a few years ago, when I spent three months on a surfing trip along the east coast. This time I had only a month, crossing the Transkei from Cape Town to Durban. While I traveled through South Africa my writing got under way, fleshing out several chapters of the outline I had submitted to O'Reilly earlier.
The Transkei is a deeply rural part of South Africa, all rolling hills, small villages, and mud huts. The people still follow a chieftain system with a single headman, and most locals live by fishing. It took two days of jolting along rough dirt roads to reach the place I had set my heart on, a beautiful bay called Coffee Bay. There I rested, downloaded some reference material, and prepared for an expedition to a more remote bay.
I still clearly remember walking miles to that untouched beach, passing through villages isolated from one another by sand and dunes. At one point we had to cross a wide river by swimming; I held my pack over my head to keep the camera and iPod inside dry. Africa is a place that pulls you out of the everyday world, frees your mind, and makes you rethink what matters most in life.
The next stop was Hong Kong, where I spent my 21st birthday, and from there I traveled overland from Singapore to Hanoi in Vietnam. Many people don't believe that 70% of Hong Kong is covered by country parks; I hiked several of the spectacular trails, such as the Dragon's Back. I spent a few days hanging around boot.hk, a coworking site, and taught a fellow traveler some Ruby along the way. Then, at night, I partied with some surfers in Soho until the early hours.
Thailand through Cambodia to Vietnam was my favorite part of the trip. If you have never been to Asia, you absolutely should go once. The countries are beautiful, the climate is wonderful, the food is delicious, and the people are friendly. Angkor Wat is one of the most amazing places in the world; everyone should see it. It was Trey Ratcliff's photographs that drew me there, as they did for many of my other destinations. That guy is the best first advertisement many places could ask for.
On some obscure little blog I had heard of a remote and beautiful island off the Cambodian coast, with a bar near Sihanoukville that could only be reached by small fishing boat. A few very good friends and I took the night bus and set out to find this legendary bar. The search took the best part of a day, with every bar we asked pointing us to another one. In the end we got the answer, and the next morning we took a short bus ride out to the place.
The photo above shows a ten-dollar-a-night wooden hut on the shore. Away from the local settlement, our group seemed to be the only people on the island, and we had the run of it. By day we lazed on the beach eating fresh fruit salads prepared by the island's cook; at night we swam in a sea full of glowing plankton.
The next stop was Vietnam. We followed a tributary of the Mekong to a small border town where we were the only Westerners, and communication became the biggest problem. Fortunately we found perhaps the only English speaker in town, who guided us around by bike. He was a huge help when one of the local ATMs swallowed my credit card!
Our group split up along several routes, and by the time I reached Vietnam the book was on schedule and going very smoothly. I stayed a few extra weeks in Saigon, which let me make major progress on several chapters; it was Lunar New Year, and the atmosphere was spectacular and festive.
Then came Japan, Australia, New Zealand, and Hawaii. I can't fit everything I felt into this article, but it is no exaggeration to call it an unforgettable stretch of my life. That so much beautiful scenery fits into a single country is astonishing; I am talking about New Zealand. My favorite memories are running in the sunshine along a lakeshore in Wanaka, and trekking for days through the mountains of the Routeburn with food and supplies on my back. Traveling through that country I made several friends to last a lifetime. It is a true paradise.
Just as I was rounding New Zealand's South Island, the book was finally finished and handed over to the technical editors for review.
Next were New York and San Francisco, two magical places full of talented programmers, some of whom I was lucky enough to meet. TechCrunch Disrupt was excellent (I highly recommend the hackathon).
During the stopover between New York and San Francisco I did a fair number of job interviews at various companies, and in the end I found a front-end development job at Twitter. I was trembling with excitement at the prospect of working with such an outstanding team, and moving to San Francisco was likewise a lifelong dream of mine.
While the visa was being processed I traveled through Central and South America, working along the way on a small project of mine: a JavaScript MVC framework called Spine. I visited Costa Rica, Panama, Peru, Bolivia, and Argentina. Peru was my favorite, even though the altitude gave me no end of trouble; I spent most of my time exploring. The picture below is of the legendary hawk of Costa Rica, taken as I climbed down one of the deepest canyons in the world.
While I was in Costa Rica, a guy called Roberto messaged me on Twitter, saying he had read my book and asking whether I'd like to go surfing together. I happily agreed, took a bus to San José, and met up with him a few days later. We spent those days in his seaside apartment hacking on Spine and Ruby projects, sharing files on a portable hard drive and charging the laptops from the car's power supply. When the charge ran low we let the solar panels top it up, and we went surfing.
I recommend that everyone write a book, and especially that you write one while traveling. Imagine: if I hadn't gone to take a look at San Francisco, I might still be on the road, consulting, starting companies. Being an author won't directly earn you much money, but it will absolutely raise your standing and bring you many latent opportunities. In fact, what I truly enjoyed about the writing process was getting to study one topic seriously and in depth.
This has been the best year of my life so far, and I feel the year ahead will be even better. Now that I have settled down, travel has lost none of its pull on me; I keep my visa in one pocket and my wallet in the other, ready to leave whenever the call comes.
But this article isn't really about my travels; it is meant to send a signal:
Programmers enjoy a unique advantage: this is a profession that allows remote work, or working while traveling, in a way other professions cannot. Of course it isn't true for everyone, and in all my travels I never met a second programmer doing anything like what I was doing. That is a sad state of affairs. The message I want to send to programmers is: stop making excuses, get moving, you can do this. You only get one life, and I can promise you that this kind of living makes the trip through this world worthwhile.
As for me, I feel extremely lucky to live this way, to have found what I am passionate about, and to do what I love every day. As you can see, most of where I am now is neither accident nor luck; it is the result of planning, pursuit, and work.
You reap what you sow.
The aim of this article is not self-indulgent showing off or big talk, but to demonstrate how to set goals and to encourage you to do something similar. Think clearly about where you are now and what you want out of the coming year, then draw up a concrete series of steps that will take you to those goals. Follow your dreams.
Yusaku Sako yusaku@hortonworks.com via incubator.apache.org
Mar 5 (3 days ago)
to ambari-dev
Hi Su,
The "ambari" database has a schema called "ambari". This is accessible by
the user "ambari-server".
1. Connect to the "ambari" database as "ambari-server": psql -U ambari-server ambari (the default password is "bigdata")
2. Once you are at the psql prompt, you can list all the tables with: \dt ambari.*
3. You can query tables like this: select * from ambari.clusters;
Hope this helps.
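The same query can also be issued from Java over JDBC; here is a minimal sketch, assuming the PostgreSQL JDBC driver is on the classpath and the defaults quoted above (local server, port 5432, user "ambari-server", password "bigdata"):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class AmbariDbPeek {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/ambari", "ambari-server", "bigdata");
        Statement st = conn.createStatement();
        ResultSet rs = st.executeQuery("SELECT * FROM ambari.clusters");
        ResultSetMetaData md = rs.getMetaData();
        while (rs.next()) {
            // Print every column generically, since the schema is not documented here.
            StringBuilder row = new StringBuilder();
            for (int i = 1; i <= md.getColumnCount(); i++) {
                row.append(md.getColumnName(i)).append('=').append(rs.getString(i)).append(' ');
            }
            System.out.println(row);
        }
        rs.close();
        st.close();
        conn.close();
    }
}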
Yusaku Sako yusaku@hortonworks.com via incubator.apache.org
5:13 (8 hours ago)
to ambari-user
Hello Su,
Currently, Ambari Web does not expose a way to add new services after
the cluster is installed. However, this is possible with the
Management API.
Management API for doing POST, PUT, DELETE exists, but it has not been
documented.
This is something that we will have to work on.
One way to figure out the calls/payload format, etc., is to observe the
network traffic Ambari Web makes during the install process using the
developer tools of the web browser.
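As a sketch of that approach, the snippet below replays the kind of POST one might capture from the browser; the endpoint path, JSON payload, and admin/admin credentials are all assumptions gleaned from watching Ambari Web, not a documented contract.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class AddServiceSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint: add the HBASE service to cluster "c1".
        URL url = new URL("http://ambari-host:8080/api/v1/clusters/c1/services");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes("UTF-8"));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        conn.setDoOutput(true);
        OutputStream out = conn.getOutputStream();
        out.write("{\"ServiceInfo\": {\"service_name\": \"HBASE\"}}".getBytes("UTF-8"));
        out.close();
        System.out.println("HTTP " + conn.getResponseCode());
    }
}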
[root@hct3 conf]# cat hive-site.xml
<configuration>
  <property><name>hive.metastore.local</name><value>false</value></property>
  <property><name>hive.metastore.uris</name><value>thrift://hct2:9083</value></property>
  <property><name>hive.server2.enable.doAs</name><value>true</value></property>
  <property><name>hive.metastore.execute.setugi</name><value>true</value></property>
  <property><name>hive.metastore.cache.pinobjtypes</name><value>Table,Database,Type,FieldSchema,Order</value></property>
  <property><name>hive.metastore.warehouse.dir</name><value>/apps/hive/warehouse</value></property>
  <property><name>hive.metastore.client.socket.timeout</name><value>60</value></property>
  <property><name>javax.jdo.option.ConnectionPassword</name><value>hive</value></property>
  <property><name>hive.security.authorization.enabled</name><value>true</value></property>
  <property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:mysql://hct2/hive?createDatabaseIfNotExist=true</value></property>
  <property><name>hive.semantic.analyzer.factory.impl</name><value>org.apache.hcatalog.cli.HCatSemanticAnalyzerFactory</value></property>
  <property><name>javax.jdo.option.ConnectionUserName</name><value>hive</value></property>
  <property><name>hadoop.clientside.fs.operations</name><value>true</value></property>
  <property><name>javax.jdo.option.ConnectionDriverName</name><value>com.mysql.jdbc.Driver</value></property>
  <property><name>fs.hdfs.impl.disable.cache</name><value>true</value></property>
  <property><name>hive.security.authorization.manager</name><value>org.apache.hcatalog.security.HdfsAuthorizationProvider</value></property>
</configuration>