IRC log for #gluster, 2015-09-04


All times shown according to UTC.

Time Nick Message
00:03 JoeJulian yikes, no.
00:04 MugginsM it's only one side, and only on some of the bricks
00:04 MugginsM clients seem fairly happy that it's all still working
00:05 MugginsM although maybe if it's not healing right they wouldn't notice immediately
00:05 cliluw joined #gluster
00:07 Mr_Psmith joined #gluster
00:08 JoeJulian So it's in the brick logs?
00:08 MugginsM aha! forgot to check them, was in shd log, but the brick logs say more, just a mo
00:10 MugginsM "[2015-09-04 00:10:10.912129] E [rpcsvc.c:136:rpcsvc_get_program_vector_sizer] 0-rpc-service: RPC procedure 2 not available for Program GF-DUMP"
00:10 MugginsM was in the brick log
00:10 JoeJulian that's weird. rpc procedure 2 is rpc_ping.
00:11 MugginsM lots of them, many per second, just on some bricks
00:11 MugginsM the two servers are on AWS, different AZ but same VLAN
00:11 MugginsM no known firewall/network weirdness
00:12 MugginsM wait, it's saying that it can't ping, not that ping is failing?
00:12 JoeJulian No, it's saying that it doesn't know the definition of "GF_DUMP_PING".
00:13 JoeJulian which is an enum and evaluates to "2"
00:15 JoeJulian Didn't you do an upgrade recently?
00:15 MugginsM yes. to a fresh Ubuntu 14.04 with included 3.4.5. Then upgraded to 3.6.5
00:15 MugginsM maybe some old 3.4 libs in the way or something?
00:16 JoeJulian Yep, that brick is still running 3.4
00:16 JoeJulian Because Ubuntu, in their infinite wisdom, thinks it's a fabulous idea to start services as soon as you install a package.
00:17 JoeJulian And when you upgrade it, it doesn't kill the brick servers (glusterfsd).
00:17 MugginsM ah yeah, been bitten by that before :-/
00:18 JoeJulian As you may or may not be able to surmise, I'm not a huge fan of that policy.
00:18 MugginsM yeah, I see it. the problem bricks have older processes
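
One way to spot what MugginsM is seeing here, sketched with standard Linux commands (nothing beyond the glusterfsd process name is taken from the log; the output is illustrative):

    glusterfsd --version | head -1        # version of the binary now on disk
    ps -C glusterfsd -o pid,lstart,cmd    # running brick processes and their start times
    # a "(deleted)" link target means that process still runs the old, replaced binary:
    for pid in $(pgrep glusterfsd); do sudo ls -l /proc/$pid/exe; done
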
00:18 MugginsM don't get me started on my Ubuntu is not a Server OS soapbox :)
00:18 JoeJulian I'm doing the snoopy dance here for figuring that one out so quickly... :D
00:19 MugginsM nice work :)
00:20 ndevos joined #gluster
00:20 ndevos joined #gluster
00:23 MugginsM ok restarted all the affected processes and it's looking a lot happier
00:24 JoeJulian Excellent!
00:25 shaunm joined #gluster
00:26 MugginsM things have generally been heaps smoother with 3.6.x
00:32 amye joined #gluster
00:50 amye joined #gluster
01:02 plarsen joined #gluster
01:13 Merlin_ joined #gluster
01:27 MugginsM joined #gluster
01:36 MugginsM joined #gluster
01:40 amye joined #gluster
01:42 weijin joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:54 nishanth joined #gluster
02:02 badone__ joined #gluster
02:05 gildub joined #gluster
02:07 Gill joined #gluster
02:11 harish_ joined #gluster
02:12 Saravana_ joined #gluster
02:14 badone_ joined #gluster
02:15 MugginsM joined #gluster
02:30 MugginsM joined #gluster
02:33 maveric_amitc_ joined #gluster
02:34 nangthang joined #gluster
02:46 Merlin_ joined #gluster
02:47 MugginsM joined #gluster
02:51 overclk joined #gluster
02:53 amye left #gluster
02:53 MugginsM joined #gluster
03:02 Lee1092 joined #gluster
03:10 vishvendra joined #gluster
03:18 [7] joined #gluster
03:24 sakshi joined #gluster
03:24 overclk joined #gluster
03:28 kdhananjay joined #gluster
03:30 MugginsM joined #gluster
03:41 julim joined #gluster
03:44 maveric_amitc_ joined #gluster
03:47 calisto joined #gluster
03:50 calisto left #gluster
03:52 neha joined #gluster
03:55 ChrisNBlum joined #gluster
04:00 ashiq joined #gluster
04:05 itisravi joined #gluster
04:10 nhayashi joined #gluster
04:10 MugginsM joined #gluster
04:15 kanagaraj joined #gluster
04:16 ppai joined #gluster
04:16 RameshN joined #gluster
04:17 MugginsM joined #gluster
04:23 bharata-rao joined #gluster
04:23 weijin joined #gluster
04:28 dusmant joined #gluster
04:33 MugginsM joined #gluster
04:35 rafi joined #gluster
04:42 MugginsM joined #gluster
04:45 nhayashi joined #gluster
04:46 ramky joined #gluster
04:49 deepakcs joined #gluster
04:51 overclk joined #gluster
04:57 ndarshan joined #gluster
04:57 MugginsM joined #gluster
05:01 Bhaskarakiran joined #gluster
05:07 pppp joined #gluster
05:07 kdhananjay joined #gluster
05:10 yazhini joined #gluster
05:11 pppp joined #gluster
05:14 kotreshhr joined #gluster
05:16 vmallika joined #gluster
05:17 kotreshhr joined #gluster
05:17 skoduri joined #gluster
05:20 vimal joined #gluster
05:22 atinm joined #gluster
05:25 poornimag joined #gluster
05:28 dusmant joined #gluster
05:29 Bhaskarakiran joined #gluster
05:33 nbalacha joined #gluster
05:35 shubhendu joined #gluster
05:36 Manikandan joined #gluster
05:45 DocGreen joined #gluster
05:47 aravindavk joined #gluster
05:49 Saravana_ joined #gluster
05:49 atalur joined #gluster
05:50 kotreshhr left #gluster
05:53 hgowtham joined #gluster
05:53 kotreshhr joined #gluster
05:58 kotreshhr joined #gluster
06:02 Bhaskarakiran joined #gluster
06:07 raghu joined #gluster
06:12 jwd joined #gluster
06:12 jwaibel joined #gluster
06:12 arcolife joined #gluster
06:13 jtux joined #gluster
06:17 jiffin joined #gluster
06:18 maveric_amitc_ joined #gluster
06:19 mhulsman joined #gluster
06:19 elico joined #gluster
06:20 kshlm joined #gluster
06:22 aravindavk joined #gluster
06:29 kotreshhr joined #gluster
06:31 kovshenin joined #gluster
06:33 Trefex joined #gluster
06:47 rgustafs joined #gluster
06:48 rgustafs_ joined #gluster
06:51 raghu joined #gluster
06:53 nangthang joined #gluster
06:53 nangthang joined #gluster
06:55 DocGreen joined #gluster
06:57 DV joined #gluster
06:59 anil joined #gluster
07:09 skoduri joined #gluster
07:09 deniszh joined #gluster
07:10 aravindavk joined #gluster
07:11 poornimag joined #gluster
07:11 Manikandan joined #gluster
07:15 Bhaskarakiran joined #gluster
07:17 auzty joined #gluster
07:27 svalo joined #gluster
07:35 ctria joined #gluster
07:42 paescuj_ joined #gluster
07:47 deniszh joined #gluster
07:51 skoduri joined #gluster
07:53 poornimag joined #gluster
07:53 jcastill1 joined #gluster
07:58 jcastillo joined #gluster
08:03 kotreshhr left #gluster
08:08 arcolife joined #gluster
08:08 sakshi joined #gluster
08:16 [Enrico] joined #gluster
08:23 aravindavk joined #gluster
08:24 s19n joined #gluster
08:25 Slashman joined #gluster
08:29 LebedevRI joined #gluster
08:33 fsimonce joined #gluster
08:35 karnan joined #gluster
08:37 mhulsman joined #gluster
08:54 overclk joined #gluster
09:01 kotreshhr joined #gluster
09:10 mhulsman1 joined #gluster
09:20 Bhaskarakiran joined #gluster
09:22 xavih joined #gluster
09:34 kotreshhr1 joined #gluster
09:35 kshlm joined #gluster
09:42 ramky joined #gluster
09:48 Trefex joined #gluster
09:49 kovshenin joined #gluster
09:54 arcolife joined #gluster
10:02 overclk joined #gluster
10:03 kotreshhr joined #gluster
10:10 anil joined #gluster
10:18 weijin joined #gluster
10:22 vmallika joined #gluster
10:28 overclk joined #gluster
10:34 ira joined #gluster
10:49 David______ joined #gluster
10:50 David______ hello. i have a question on how to add 2 new nodes to 2 existing nodes. it would be great if anyone can help me with it. thanks in advance.
10:51 David______ can anyone help me?
10:51 arcolife joined #gluster
10:53 David______ hello?
10:53 glusterbot David______: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:56 itisravi David______: Peer probe the two new nodes from any of the existing nodes.
11:01 David_Varghese joined #gluster
11:02 David_Varghese hello again. i got disconnected. i have a question on how to add 2 new nodes to 2 existing nodes. it would be great if anyone can help me with it. thanks in advance.
11:02 David_Varghese originally i just used this command on 1 of the 2 existing nodes: sudo gluster volume create volume_1 replica 2 transport tcp web1:/gluster-storage web2:/gluster-storage force
11:03 David_Varghese everything is replicated nicely and both nodes have the same data. now i want to add 2 new nodes and have it replicate to all 4. i want all 4 nodes to hold the same replicated copies.
11:03 skoduri_ joined #gluster
11:04 poornimag David_Varghese, If you have a cluster of 2 nodes already, add the other 2 nodes to the cluster by executing:
11:04 arcolife joined #gluster
11:04 poornimag gluster peer probe <node3> ; gluster peer probe <node4> (on either node1/2)
11:05 poornimag then execute an add-brick command to add the bricks to the volume
11:08 shyam joined #gluster
11:08 David_Varghese poornimag, i already peer probed the new nodes from node1
11:08 kdhananjay joined #gluster
11:08 poornimag David_Varghese, the command being: gluster volume add brick replica 4 <vol> node3:brick node4:brick
11:09 David_Varghese poornimag, i want to know the exact command that will get the same data replicated to the 2 new nodes
11:09 poornimag David_Varghese, s/add brick/add-brick
11:09 David_Varghese poornimag, i just need to execute that command? and don't need to change anything on the existing nodes?
11:11 David_Varghese poornimag, because i created the volume on the existing nodes with replica 2
11:12 David_Varghese poornimag, i got confused by joejulian's article which said i need to move/replace the brick?
11:13 David_Varghese poornimag, the article i read is how-to-expand-glusterfs-replicated-clusters-by-one-server on his blog
11:14 anoopcs vim
11:15 poornimag David_Varghese, So to populate the existing data onto the newly added bricks, you will have to execute "gluster volume heal <vol> full"
11:15 marlinc joined #gluster
11:16 David_Varghese poornimag, do i need to execute that command on each new node, or on one of the existing nodes?
11:16 poornimag David_Varghese, on any one of the nodes
11:18 David_Varghese poornimag, do i need to specify tcp when adding the new nodes, as i did when i created the volume?
11:18 David_Varghese poornimag, the original command when creating the volume: sudo gluster volume create volume_1 replica 2 transport tcp web1:/gluster-storage web2:/gluster-storage force
11:19 David_Varghese poornimag, do i need to specify for also in the execution?
11:19 David_Varghese poornimag, do i need to specify force also in the execution?
11:20 marlinc joined #gluster
11:22 marlinc joined #gluster
11:23 poornimag David_Varghese, force is not required if you had provided a second-level directory as the brick, but for now you can pass the force option to add-brick as well
11:23 David_Varghese poornimag, sudo gluster volume add brick replica 4 volume_1 web3:/gluster-storage web4:/gluster-storage ?
11:23 David_Varghese poornimag, or sudo gluster volume add brick replica 4 transport tcp volume_1 web3:/gluster-storage web4:/gluster-storage force?
11:25 firemanxbr joined #gluster
11:26 poornimag David_Varghese, sudo gluster volume add-brick replica 4 volume_1 web3:/gluster-storage web4:/gluster-storage force
11:27 poornimag David_Varghese, you can verify by executing gluster vol info
11:28 Trefex joined #gluster
11:29 David_Varghese poornimag, i get this when i execute it: Usage: volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] <NEW-BRICK> ... [force]
11:30 poornimag David_Varghese, you will have to specify add-brick volume_1 replica 4 ....
11:31 David_Varghese poornimag, i did copy and paste the command.
11:33 David_Varghese poornimag, ok got it. the arguments were not in the right order
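
Putting the corrected pieces together, the whole expansion would look roughly like this (a sketch: hostnames web3/web4, the /gluster-storage brick path, and volume_1 are taken from the conversation, and the argument order follows the usage message above):

    sudo gluster peer probe web3
    sudo gluster peer probe web4
    sudo gluster volume add-brick volume_1 replica 4 web3:/gluster-storage web4:/gluster-storage force
    sudo gluster volume info volume_1    # verify the new replica count and brick list
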
11:34 overclk joined #gluster
11:35 David_Varghese poornimag, do i need to start the volume after heal?
11:36 poornimag David_Varghese, Is the volume in the stopped state? you will have to start it to be able to mount it
11:40 overclk_ joined #gluster
11:44 David_Varghese poornimag, how do i check the state?
11:45 David_Varghese poornimag, how do i know it has finished healing? Brick web3:/gluster-storage/
11:45 David_Varghese Number of entries: 0
11:46 mhulsman joined #gluster
11:47 arcolife joined #gluster
11:47 rgustafs joined #gluster
11:48 harish_ joined #gluster
11:53 poornimag David_Varghese, You can check the link https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Self_heal.html
11:53 glusterbot Title: 10.8. Triggering Self-Heal on Replicate (at access.redhat.com)
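
The trigger-and-monitor steps from that document, applied here, would be roughly (a sketch using the volume name from the conversation):

    sudo gluster volume heal volume_1 full    # trigger a full self-heal across all bricks
    sudo gluster volume heal volume_1 info    # list entries still pending heal, per brick
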
11:54 amye joined #gluster
12:01 Bonaparte joined #gluster
12:04 Mr_Psmith joined #gluster
12:04 pdrakeweb joined #gluster
12:06 [Enrico] joined #gluster
12:12 aravindavk joined #gluster
12:12 jtux joined #gluster
12:16 David_Varghese poornimag, healing caused all my nodes to go down. and the data is still not replicated to both new nodes. how can i fix this?
12:17 atalur_ joined #gluster
12:26 mhulsman joined #gluster
12:27 Akee joined #gluster
12:28 s19n David_Varghese: find . -noleaf -print0 | xargs --null stat > /dev/null # from the gluster mountpoint on a client
12:28 s19n that should trigger self-heal
12:30 mhulsman1 joined #gluster
12:31 ashiq joined #gluster
12:32 julim joined #gluster
12:35 jcastill1 joined #gluster
12:36 cyberswat joined #gluster
12:38 spalai joined #gluster
12:39 kovshenin joined #gluster
12:40 jcastillo joined #gluster
12:42 David_Varghese s19n, currently my server is very slow since executing the full heal. i don't know what is actually happening to cause the slow response
12:45 s19n it could be the full heal itself. 4 nodes with 1 brick each in replica 4? How are the nodes connected?
12:45 Bonaparte left #gluster
12:45 Saravana_ joined #gluster
12:51 shyam joined #gluster
12:56 aravindavk joined #gluster
13:03 gildub joined #gluster
13:07 David_Varghese s19n, all in the same datacenter
13:07 s19n David_Varghese: with 1Gb/s links? 10Gbit? Bonding/LACP/?
13:09 mhulsman joined #gluster
13:11 jcastill1 joined #gluster
13:15 drankis joined #gluster
13:17 David_Varghese actually all nodes are running as digitalocean droplets
13:17 jcastillo joined #gluster
13:17 David_Varghese s19n, actually all nodes are running as digitalocean droplets
13:17 Bonaparte joined #gluster
13:18 Bonaparte glusterd fails to start. http://paste2.org/MC5MKg2F
13:18 glusterbot Title: Paste2.org - Viewing Paste MC5MKg2F (at paste2.org)
13:20 s19n David_Varghese: it could be interesting to run a quick test with netperf / iperf to measure the available bandwidth between peers
13:21 haomaiwa_ joined #gluster
13:21 Bonaparte Could someone suggest how to go about troubleshooting it?
13:24 David_Varghese s19n, i'm not sure how to test
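
The iperf test s19n suggests would look roughly like this (a sketch assuming classic iperf2 syntax and the hostnames from the conversation):

    # on web1:
    iperf -s
    # on web3:
    iperf -c web1 -t 30    # measure throughput to web1 for 30 seconds
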
13:25 David_Varghese s19n, how do i check if the servers have the same data? is there a way to check without listing all the files and verifying everything is there?
13:26 s19n David_Varghese: in a while you should see the new nodes' available space decrease
13:27 s19n available space or available inodes count
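
A way to watch that from the shell, assuming the bricks sit at /gluster-storage as above:

    df -h /gluster-storage    # used space should grow on the new bricks as files heal in
    df -i /gluster-storage    # or watch the used-inode count climb
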
13:28 unclemarc joined #gluster
13:32 harold joined #gluster
13:36 dgandhi joined #gluster
13:41 mhulsman joined #gluster
13:47 bennyturns joined #gluster
13:50 cyberswat joined #gluster
13:58 mhulsman1 joined #gluster
14:01 haomaiwa_ joined #gluster
14:02 marlinc joined #gluster
14:03 plarsen joined #gluster
14:03 jobewan joined #gluster
14:07 ramky joined #gluster
14:11 Bonaparte glusterd fails to start. http://paste2.org/MC5MKg2F Could someone suggest how to go about troubleshooting it?
14:11 glusterbot Title: Paste2.org - Viewing Paste MC5MKg2F (at paste2.org)
14:23 s19n Bonaparte: missing /usr/lib64/glusterfs/3.7.4/rpc-transport/rdma.so ?
14:30 Bonaparte s19n, just fixed the issue by deleting everything but the .info file
14:31 arcolife joined #gluster
14:45 cholcombe joined #gluster
14:50 nbalacha joined #gluster
15:01 haomaiwa_ joined #gluster
15:01 itisravi joined #gluster
15:03 rafi joined #gluster
15:09 itisravi joined #gluster
15:11 itisravi joined #gluster
15:14 drankis joined #gluster
15:18 weijin joined #gluster
15:24 rafi joined #gluster
15:27 atalur joined #gluster
15:27 tru_tru joined #gluster
15:39 itisravi joined #gluster
15:46 nbalacha joined #gluster
15:55 rafi1 joined #gluster
15:56 genunix joined #gluster
15:56 yosafbridge joined #gluster
15:58 neofob joined #gluster
16:01 haomaiwa_ joined #gluster
16:06 rafi joined #gluster
16:07 rafi joined #gluster
16:08 social joined #gluster
16:19 jbrooks joined #gluster
16:20 jbrooks joined #gluster
16:20 calavera joined #gluster
16:28 wushudoin| joined #gluster
16:29 rafi joined #gluster
16:33 wushudoin| joined #gluster
16:47 natgeorg joined #gluster
16:50 kdhananjay joined #gluster
16:56 Gill joined #gluster
16:59 cyberswa_ joined #gluster
17:01 haomaiwa_ joined #gluster
17:15 amye joined #gluster
17:42 Pupeno joined #gluster
17:46 timotheus1 joined #gluster
17:51 skoduri joined #gluster
17:53 rafi joined #gluster
17:59 calisto joined #gluster
18:01 haomaiwa_ joined #gluster
18:17 rwheeler joined #gluster
18:30 Rapture joined #gluster
18:37 marlinc joined #gluster
18:42 skoduri joined #gluster
19:01 7GHAA4I1O joined #gluster
19:16 Jas joined #gluster
19:16 Guest41143 Hi Guys
19:17 Guest41143 One of my bricks has gone offline
19:17 Guest41143 And I'm not sure how I can get it back to online status
19:17 Guest41143 Could someone please help me?
19:23 squizzi joined #gluster
19:43 Pupeno joined #gluster
19:56 Bonaparte Guest41143, what exactly do you mean by brick going offline?
19:57 Guest41143 Bonaparte: when I do a gluster volume status, I can see that one of the volumes is offline
19:58 Bonaparte Is your brick not shown? Or volume not shown? Paste some output somewhere
19:59 Guest41143 Any ideas or suggestions?
19:59 Guest41143 on the system
20:00 Bonaparte Guest41143, ^
20:01 corretico joined #gluster
20:01 haomaiwa_ joined #gluster
20:02 JoeJulian On the server with the failed brick, "gluster volume start $volname force" will restart that brick (unless there's a reason for it to be down).
20:02 JoeJulian Check your brick logs.
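
A minimal sketch of that recovery, with a hypothetical volume name standing in for $volname:

    sudo gluster volume status volume_1         # the failed brick shows N in the Online column
    sudo gluster volume start volume_1 force    # restarts any brick process that is down
    sudo less /var/log/glusterfs/bricks/*.log   # then find out why the brick died in the first place
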
20:13 boom joined #gluster
20:13 DV joined #gluster
20:14 boom666 joined #gluster
20:14 leathermask joined #gluster
20:15 l0uis_ joined #gluster
20:15 leathermask hi - got a question. is it possible for a gluster server to mount its own glusterfs filesystem? ie, can a server in the cluster also be a client?
20:15 malevolent_ joined #gluster
20:15 acidchil1 joined #gluster
20:15 d-fence_ joined #gluster
20:15 markd_ joined #gluster
20:15 portante_ joined #gluster
20:16 Intensity joined #gluster
20:16 Intensity joined #gluster
20:16 jatb joined #gluster
20:16 doekia_ joined #gluster
20:16 frankS2_ joined #gluster
20:16 leathermask i've tried to mount the glusterfs on a different mount point and the server just hangs
20:16 genunix joined #gluster
20:16 PaulePanter joined #gluster
20:16 DJClean joined #gluster
20:16 cliluw joined #gluster
20:17 [o__o] joined #gluster
20:17 xiu joined #gluster
20:17 JoeJulian yes
20:17 leathermask any idea why the server would hang trying to mount?
20:18 JoeJulian Would need to have a look at the client log. ,,(paste)
20:18 glusterbot For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
20:19 semiosis joined #gluster
20:19 semiosis joined #gluster
20:21 milkyline joined #gluster
20:22 leathermask ok thanks
20:24 leathermask good grief - it's working now. used a different mount point. switched from /mnt/glusterfs to /data
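
For reference, mounting a volume locally on one of its own servers is an ordinary FUSE mount; a sketch assuming a volume named volume_1 served by web1:

    sudo mkdir -p /data
    sudo mount -t glusterfs web1:/volume_1 /data
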
20:29 hagarth joined #gluster
20:33 Rapture joined #gluster
20:34 JoeJulian semiosis: The most recent occasion: https://botbot.me/freenode/gluster/2015-09-03/?msg=48901738&page=5
20:34 glusterbot Title: IRC Logs for #gluster | BotBot.me [o__o] (at botbot.me)
20:34 semiosis thanks
20:37 semiosis JoeJulian: EBS uses delayed allocation, so if you create a 100GB disk only a few blocks are allocated when you make a FS on it.  the first time you write into EBS can be much slower than subsequent writes because the blocks need to be allocated.  that might explain why writing a ton of bytes to a new EBS makes the brick slow
20:38 JoeJulian Hmm, could be.
20:39 semiosis a good test would be using dd to write zeroes to the ebs vol before formatting it and mounting as a brick
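
The pre-warm semiosis describes would be roughly the following (a sketch: /dev/xvdf is a hypothetical EBS device name, and dd overwrites the device, so run it only before creating the filesystem):

    sudo dd if=/dev/zero of=/dev/xvdf bs=1M    # touch every block so EBS allocates it
    sudo mkfs.xfs /dev/xvdf                    # then format (XFS shown as an example) and use as a brick
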
20:46 weijin joined #gluster
20:53 RedW joined #gluster
20:58 lanning joined #gluster
20:58 lanning_ joined #gluster
20:58 lanning__ joined #gluster
20:58 lanning_1 joined #gluster
20:59 lanning_2 joined #gluster
21:01 social joined #gluster
21:01 haomaiwa_ joined #gluster
21:05 diegows joined #gluster
21:08 natgeorg joined #gluster
21:09 lanning_1 joined #gluster
21:10 papamoose1 joined #gluster
21:20 lanning_2 joined #gluster
21:28 plarsen joined #gluster
21:31 diegows joined #gluster
21:32 amye joined #gluster
21:32 daveMh joined #gluster
21:34 daveMh hey folks, I'm looking at libgfapi-python and I'm wondering if a gluster server sends any signals about file operations in a volume to its connected clients. ideally I'd like the server to tell me if a file/directory was changed so I can flush some local caches
21:34 kkeithley joined #gluster
21:35 daveMh joined #gluster
21:43 Pupeno joined #gluster
22:01 haomaiwang joined #gluster
22:11 DV joined #gluster
22:17 chuz04arley_ joined #gluster
22:45 julim joined #gluster
22:46 Mr_Psmith joined #gluster
23:01 haomaiwa_ joined #gluster
23:11 rafi1 joined #gluster
23:13 Pupeno joined #gluster
23:40 haomaiw__ joined #gluster
23:41 amye joined #gluster
23:48 muneerse joined #gluster
23:55 amye joined #gluster
