Thursday, February 25, 2016

[hadoop] could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation. (solution)



How to approach this problem when you see the following:

16/02/24 15:12:35 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/x1._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.




====

[hdfs@vagrant-centos65 tmp]$ hdfs dfs -put /tmp/x1 /tmp
16/02/24 15:12:35 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/x1._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2503)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2053)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2047)

at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at $Proxy9.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at $Proxy9.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1231)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
put: File /tmp/x1._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.
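
The stack trace alone does not say why no datanode could be chosen. As a quick triage sketch (assuming you can run commands as the hdfs user), first confirm that a datanode has registered and still has free space:

    sudo -u hdfs hdfs dfsadmin -report   # live datanodes, capacity, DFS remaining
    df -h                                # run on the datanode itself: is its data disk full?

If the report shows a live datanode with space left, the cause is usually at the network level, such as the /etc/hosts problem described in the next post.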

[hadoop] hadoop client failed on connection exception: java.net.ConnectException: Connection refused (solution)


Running an HDFS command on the hadoop client produced the following error:


[root@vagrant-centos652 conf]# hadoop fs -ls /
ls: Call From vagrant-centos652.vagrantup.com/127.0.0.1 to vagrant-centos65.vagrantup.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused



Generally this means the client cannot reach the namenode. To triage:
  • First check whether this machine can ping the master.
  • telnet master-host port
  • telnet vagrant-centos65.vagrantup.com 8020
vagrant-centos65.vagrantup.com is my master's hostname and is already listed in /etc/hosts.

If telnet reports Connection refused, go back to the namenode host and check which IP the port is bound to:

[vagrant@vagrant-centos65 ~]$ netstat -anpt  | grep "8020"
(No info could be read for "-p": geteuid()=500 but you should be root.)
tcp        0      0 10.1.193.179:8020           0.0.0.0:*                   LISTEN      -                   
tcp        0      0 10.1.193.179:8020           10.1.193.149:56089          ESTABLISHED -                   
tcp        0      0 10.1.193.179:54539          10.1.193.179:8020           TIME_WAIT   -      

If it is bound to 127.0.0.1, check the settings in /etc/hosts:
[vagrant@vagrant-centos65 ~]$ cat /etc/hosts
127.0.0.1   localhost
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.1.193.149  vagrant-centos652.vagrantup.com
10.1.193.179  vagrant-centos65.vagrantup.com

===
Check whether the hostname is written on the same line as 127.0.0.1. Some single-machine tutorials suggest putting it on the localhost line, but in a multi-node setup that makes the namenode bind to loopback.
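
To illustrate the misconfiguration (hostnames taken from this setup):

    # problematic: the cluster hostname resolves to loopback, so the namenode
    # binds port 8020 on 127.0.0.1 and remote clients get Connection refused
    127.0.0.1   localhost vagrant-centos65.vagrantup.com

    # fixed: keep localhost on the loopback line and map the hostname to the real IP
    127.0.0.1   localhost
    10.1.193.179  vagrant-centos65.vagrantup.com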


Other error messages may also occur (the full stack trace is the same as in the post above):

16/02/24 15:12:35 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/x1._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.





[hadoop] hadoop namenode format: formatting HDFS / resetting HDFS space



A key step when setting up Hadoop HDFS is formatting the namenode. You can also use hadoop namenode -format if the metadata is corrupted, if you want to throw away a test cluster, or if you want to wipe all the data and start over.

"The first step to starting up your Hadoop installation is formatting the Hadoop filesystem, which is implemented on top of the local filesystems of your cluster. You need to do this the first time you set up a Hadoop installation. Do not format a running Hadoop filesystem, this will cause all your data to be erased."

First switch to the hdfs user; normally the hdfs user is the one that creates the relevant directories.

su hdfs

% $HADOOP_INSTALL/hadoop/bin/hadoop namenode -format
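
If the goal is a full reset of a test cluster, formatting alone may not be enough: the datanodes' storage directories keep the old clusterID, and after a reformat they can refuse to register with an "Incompatible clusterIDs" error. A sketch of the extra step, with the data directory below being an assumption (check dfs.datanode.data.dir in hdfs-site.xml for the real path):

    # stop the HDFS daemons first, then on the namenode:
    sudo -u hdfs hdfs namenode -format

    # on each datanode, clear the old block storage (path is an assumption):
    rm -rf /var/lib/hadoop-hdfs/data/*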


GettingStartedWithHadoop - Hadoop Wiki
https://wiki.apache.org/hadoop/GettingStartedWithHadoop#Formatting_the_Namenode

Tuesday, February 23, 2016

[hadoop] How to get the namenode out of safemode: Name node is in safe mode.



When operating on HDFS, you may run into the "Name node is in safe mode." error.

If this is only a development or test environment, you can try the command below to leave safemode directly; on a single machine it is fine to just force it out.
(HDFS normally leaves safemode on its own once enough blocks satisfy the replication threshold.)

Safemode

During start up the NameNode loads the file system state from the fsimage and the edits log file. It then waits for DataNodes to report their blocks so that it does not prematurely start replicating the blocks though enough replicas already exist in the cluster. During this time NameNode stays in Safemode. Safemode for the NameNode is essentially a read-only mode for the HDFS cluster, where it does not allow any modifications to file system or blocks. Normally the NameNode leaves Safemode automatically after the DataNodes have reported that most file system blocks are available. If required, HDFS could be placed in Safemode explicitly using bin/hdfs dfsadmin -safemode command. NameNode front page shows whether Safemode is on or off. A more detailed description and configuration is maintained as JavaDoc for setSafeMode().



[root@vagrant-centos65 conf]#  sudo -u hdfs hadoop fs -mkdir -p /test
mkdir: Cannot create directory /test. Name node is in safe mode.

[root@vagrant-centos65 conf]# sudo -u hdfs hdfs dfsadmin -safemode leave

Safe mode is OFF
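
Before forcing it out, it helps to check what state the namenode is actually in. dfsadmin also accepts these safemode subcommands:

    sudo -u hdfs hdfs dfsadmin -safemode get     # report whether safemode is ON or OFF
    sudo -u hdfs hdfs dfsadmin -safemode wait    # block until it leaves on its own
    sudo -u hdfs hdfs dfsadmin -safemode enter   # put it back on, e.g. for maintenance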

Google "maximum number of accounts reached": the per-phone-number registration limit for Gmail


If this appears while you are verifying with the same phone number, you may have hit the limit of accounts for that number (around 25), or a run of back-to-back registrations may have triggered it. Google's help text says:

"Maximum number of accounts reached"


If you see the error message, "This phone number has already created the maximum number of accounts," you'll have to use a different number. In an effort to protect our users from abuse, we limit the number of accounts each phone number can create.

Sunday, February 21, 2016

[startup] About Nextdoor, a location-based social network service from abroad


Nextdoor: Join the free private social network for your neighborhood
https://nextdoor.com/

There seem to be quite a few location-based services abroad, and they are thriving. In Taiwan, people work such long hours that there is little demand for this kind of thing.

As a result, services like this are rare in Taiwan's startup scene; at most you see small lifestyle services like restaurant hunting.

People abroad, on the other hand, have plenty of needs and services tied to their own neighborhoods.


Saturday, February 20, 2016

On learning, 2016, part 1: learning Flash


How we learn is a topic many people keep coming back to. Only when your mind is calm do you notice your own thinking; that is, only then can you look back at yourself, see how you actually think, and recognize the patterns you rely on out of habit.

Turmoil and mindless busyness, on the other hand, really do make you lose sight of your own thoughts and patterns.

Looking back at my earliest learning, the results always came from putting in a stretch of time and practicing over and over.

I remember learning Flash around seventh or eighth grade. If school let out at 4, in winter I would be home by about 4:15. After a snack I would immediately start working through the book's examples one by one, practicing until dinner at 6.

I am convinced that without all that time spent in trial and error back then, I would not have today's feel for computers and systems.

At some point I almost forgot that this kind of investment is indispensable.

Without putting in the time, how could the results ever become as automatic as a reflex?


Friday, February 19, 2016

[mac] Use Monolingual to reclaim Mac disk space by removing unused language files


On a Mac Air with a small SSD you often find that your own files do not account for all the used space; some of it is taken up by installed software, and applications bundle resource files for many languages.
Monolingual is a tool that cleans out the language files you never use.




Monolingual

Monolingual is a program for removing unnecessary language resources from OS X, in order to reclaim several hundred megabytes of disk space. It requires a 64-bit capable Intel-based Mac and at least OS X 10.11 (El Capitan).
I don't know about you, but I use my computer in only one (human) language — English. And I'm willing to bet that you do too, albeit perhaps not English. So why do you have a bunch of localization files for the operating system filling up your hard drive? Enter Monolingual — a handy utility for reclaiming your space for more useful things… like international mp3 files, email or whatever you like.
Version 1.6.7 is the last version for OS X 10.10 (Yosemite). Version 1.5.10 is the last version for Mac OS X 10.7 (Lion), OS X 10.8 (Mountain Lion) and OS X 10.9 (Mavericks). Version 1.4.5 is the last version for Mac OS X 10.6 (Snow Leopard) which also includes PowerPC support. Version 1.3.9 is the last version for Mac OS X 10.4 (Tiger).


Monolingual
https://ingmarstein.github.io/Monolingual/
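
To get a rough idea of how much space these localization bundles occupy before running the tool, here is a read-only sketch (it assumes en.lproj is the language you keep; adjust to yours):

    # sum the size of non-English .lproj bundles under /Applications
    find /Applications -type d -name '*.lproj' ! -name 'en.lproj' \
      -exec du -sk {} + | awk '{s+=$1} END {printf "%.1f MB\n", s/1024}'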

Thursday, February 18, 2016

[docker] Using a socks5 tunnel as Docker's HTTP proxy


Environment: Ubuntu Linux
  • First set up the socks5 tunnel to use:

    ssh -D 8118 user@server 
  • Install tsocks (transparent network access through a SOCKS 4 or 5 proxy); a tsocks.conf sketch follows this list:

    sudo apt-get install tsocks
  • Configure docker to use an http proxy (see the reload note after this list):

    vim /etc/systemd/system/docker.service.d/http-proxy.conf

    Add:

    [Service]
    Environment="HTTP_PROXY=http://127.0.0.1:8118/"
  • Start docker wrapped in tsocks:
    tsocks /usr/bin/docker -d
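
tsocks needs to know where the SOCKS5 server is before the wrapper above can work. A minimal sketch of /etc/tsocks.conf, assuming the ssh -D tunnel from the first step:

    # /etc/tsocks.conf
    server = 127.0.0.1       # the ssh -D tunnel above
    server_port = 8118
    server_type = 5          # SOCKS version 5

If you let systemd manage the daemon instead of starting it by hand, reload it after editing the drop-in (standard systemd commands):

    sudo systemctl daemon-reload
    sudo systemctl restart docker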

Once you see docker's log, you know it is now using the configured socks5 tunnel as its http proxy server. (Note that ssh -D opens a SOCKS5 proxy, not an HTTP one, which is why docker is additionally wrapped in tsocks here.)