IRC log for #gluster, 2016-03-10


All times shown according to UTC.

Time Nick Message
00:01 samikshan joined #gluster
00:01 haomaiwa_ joined #gluster
00:03 g3kk0 joined #gluster
00:05 jhyland joined #gluster
00:08 mpingu joined #gluster
00:09 amye joined #gluster
00:17 hagarth joined #gluster
00:36 calavera joined #gluster
00:45 muneerse2 joined #gluster
00:46 liviudm_ joined #gluster
00:48 _nixpanic joined #gluster
00:48 juhaj_ joined #gluster
00:48 R0ok__ joined #gluster
00:48 lanning_ joined #gluster
00:48 _nixpanic joined #gluster
00:48 alghost joined #gluster
00:49 siel joined #gluster
00:49 siel joined #gluster
00:49 JoeJulian_ joined #gluster
00:49 yosafbridge joined #gluster
00:49 shyam joined #gluster
00:50 ashiq joined #gluster
00:50 papamoose joined #gluster
00:50 moss joined #gluster
00:51 kalzz joined #gluster
00:51 johnmilton joined #gluster
00:54 sagarhani joined #gluster
01:01 haomaiwa_ joined #gluster
01:04 ovaistariq joined #gluster
01:06 jhyland joined #gluster
01:08 jhyland joined #gluster
01:21 nathwill joined #gluster
01:43 mowntan joined #gluster
01:58 ovaistariq joined #gluster
01:59 haomaiwa_ joined #gluster
02:02 baojg joined #gluster
02:14 Lee1092 joined #gluster
02:20 gem joined #gluster
02:24 Merlin_ joined #gluster
02:37 nishanth joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:59 arcolife joined #gluster
02:59 the-me joined #gluster
03:01 tessier volume start: 9e: failed: Staging failed on 230089fe-021d-4c52-b190-24af095070fb. Error: Volume id mismatch for brick 10.0.1.20:/gluster/e/brick. Expected volume id dd1577e3-29bf-415d-877f-d0c7a289ead1, volume id 7fc40a98-9144-478f-a416-af91b5347505 found
03:02 tessier ugh.... I just restarted gluster and now things are confused.
03:04 tessier I haven't moved around any blocks or anything.
03:09 overclk joined #gluster
03:26 overclk joined #gluster
03:33 JoeJulian tessier: Looks like your brick devices mounted on the wrong mountpoints.
03:34 gem joined #gluster
03:34 JoeJulian tessier: Are you mounting them by /dev/sdx1 or by uuid, partuuid, or label? Perhaps the devices readied in a different order than normal.
03:36 Manikandan joined #gluster
03:37 tessier JoeJulian: Ah...I did recently reboot this box. I'm mounting them by /dev/sdc1 etc.
03:37 JoeJulian Betcha that's it.
03:37 atinm joined #gluster
03:40 tessier Do most people use uuid these days?
03:42 JoeJulian I like partuuid
03:44 JoeJulian To be honest, though, I make my own udev rules so that every enclosure and slot in the enclosure is identified as /dev/enclosure0/slot0
03:45 RameshN joined #gluster
03:45 tessier You are right. This thing is hopelessly confused because things are mounted in the wrong place after the reboot. One disk even failed to mount at all. But gluster created the brick/ under where the mountpoint should be and copied a bunch of data into it.
03:45 JoeJulian Odd. That shouldn't have happened unless you did a start...force
03:45 tessier Hopefully when I mount the correct volume there gluster will see that the volume is out of date and copy the stuff over from the other side, which does appear to have received a good copy onto the right disk.
03:46 tessier Ah....I did do a start force. Because I thought I was seeing a repeat of a previous problem I had had where start force was the solution.
03:47 tessier Glad it sanity checked the volume IDs and didn't let me start force on the ones that were mounted in the wrong places!
03:47 JoeJulian Me too!
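
For reference, the volume id that glusterd checks here is stored as an extended attribute on each brick root, so a mis-mounted disk can be matched against the id glusterd expects for that brick. A sketch, reusing the brick path and volume name from the error above (run on the brick server):

    getfattr -n trusted.glusterfs.volume-id -e hex /gluster/e/brick
    grep volume-id /var/lib/glusterd/vols/9e/info
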
03:51 hgowtham joined #gluster
03:51 tessier Now I get to go through 24 disks, figure out which ones belong where, check their partuuid, and set up fstab to mount them correctly.
03:51 unforgiven512 joined #gluster
03:53 tessier And then I get to reconfigure my other machines to mount by partuuid too.
03:53 tessier So we don't have a repeat of this. Glad I only patched and rebooted one side at a time.
03:53 JoeJulian Yay! Job security! ;)
03:53 tessier :)
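
A minimal sketch of the fstab change being discussed, assuming an XFS brick and a reasonably recent util-linux (older EL6 userspace only understands UUID=/LABEL= in fstab); the PARTUUID value, device, and mount point are placeholders, with the real value coming from blkid:

    blkid /dev/sdc1
    # then in /etc/fstab, instead of /dev/sdc1:
    PARTUUID=8f3c01e2-01  /gluster/e  xfs  defaults,noatime  0 0
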
03:56 tessier My VMs all currently use iscsi with software mirroring across two machines. So whenever I patch/reboot one side I have to split the mirrors, patch/reboot, rejoin the mirrors. A serious PITA. I've been slowly working on getting to the point (not there yet) where I can move my VM images over to gluster and libgfapi and not have to deal with that.
03:56 JoeJulian Nice
03:57 JoeJulian I'm not an iscsi fan. It's too delicate of a flower for me.
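
For context on the libgfapi route mentioned above: qemu built with gluster support can open images straight off a volume via gluster:// URLs, skipping the FUSE mount. A sketch with hypothetical host, volume, and image names:

    qemu-img create -f qcow2 gluster://storage1/vmvol/guest1.qcow2 40G
    qemu-system-x86_64 -m 2048 -drive file=gluster://storage1/vmvol/guest1.qcow2,if=virtio,cache=none
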
03:59 johnmilton joined #gluster
04:05 itisravi joined #gluster
04:05 shubhendu joined #gluster
04:15 amye joined #gluster
04:15 ppai joined #gluster
04:16 johnmilton joined #gluster
04:20 pur joined #gluster
04:22 hgowtham joined #gluster
04:22 nbalacha joined #gluster
04:22 ppai joined #gluster
04:24 atinm joined #gluster
04:29 dlambrig__ joined #gluster
04:30 karnan joined #gluster
04:30 tswartz joined #gluster
04:40 nehar joined #gluster
04:50 nehar joined #gluster
05:00 surabhi joined #gluster
05:01 ndarshan joined #gluster
05:02 ovaistariq joined #gluster
05:03 prasanth joined #gluster
05:03 surabhi joined #gluster
05:08 ndarshan joined #gluster
05:11 kotreshhr joined #gluster
05:16 aravindavk joined #gluster
05:20 karthikfff joined #gluster
05:20 gem joined #gluster
05:21 EinstCrazy joined #gluster
05:22 aravindavk joined #gluster
05:25 poornimag joined #gluster
05:29 skoduri joined #gluster
05:32 Apeksha joined #gluster
05:32 kdhananjay joined #gluster
05:32 tswartz joined #gluster
05:32 spalai joined #gluster
05:36 jiffin joined #gluster
05:45 gowtham joined #gluster
05:46 k-ma joined #gluster
05:46 foster joined #gluster
05:46 atrius_ joined #gluster
05:46 purpleidea joined #gluster
05:47 vshankar joined #gluster
05:49 Saravanakmr joined #gluster
05:49 21WAAEJW5 joined #gluster
05:49 overclk joined #gluster
05:51 Bhaskarakiran joined #gluster
06:00 Merlin_ joined #gluster
06:01 haomaiwa_ joined #gluster
06:02 kshlm joined #gluster
06:04 jiffin1 joined #gluster
06:08 ramky joined #gluster
06:08 k-ma joined #gluster
06:08 foster joined #gluster
06:08 atrius_ joined #gluster
06:08 purpleidea joined #gluster
06:08 ramky joined #gluster
06:09 skoduri joined #gluster
06:10 surabhi joined #gluster
06:11 nishanth joined #gluster
06:15 ovaistariq joined #gluster
06:19 Gnomethrower joined #gluster
06:20 m0zes joined #gluster
06:26 johnmilton joined #gluster
06:43 karthikfff joined #gluster
06:45 atalur joined #gluster
06:45 Slydder joined #gluster
06:45 Slydder morning all
06:46 Slydder is there a way to check the latency of gluster without enabling profiling?
06:46 gem joined #gluster
06:48 Gaurav_ joined #gluster
06:54 anil joined #gluster
07:01 haomaiwa_ joined #gluster
07:02 mmckeen joined #gluster
07:04 haomaiwa_ joined #gluster
07:06 kshlm Slydder, AFAIK nope.
07:06 kshlm We only collect the timing stats when profiling is enabled.
07:10 JoeJulian That's the network uncertainty principle.
07:10 Slydder trying to find the main metrics to send to grafana for graphing, and the 1 metric that matters the most is only available if profiling is enabled. which, of course, is a major performance hit.
07:10 JoeJulian "of course"?
07:11 JoeJulian I haven't heard of anyone having any performance degradation from enabling profiling.
07:12 JoeJulian I know that in my own testing I haven't seen a difference.
07:15 Slydder so you just leave profiling running all the time?
07:15 JoeJulian yep
07:16 ovaistariq joined #gluster
07:20 post-factum JoeJulian: same here. we user profiling to draw speed(block_size) charts with no performance penalty
07:20 post-factum s/user/used/
07:20 glusterbot What post-factum meant to say was: JoeJulian: same here. we used profiling to draw speed(block_size) charts with no performance penalty
07:20 Slydder are there any settings I should watch out for when running profiling all the time?
07:21 JoeJulian I'm trying for some snarky answer, but I think I'm too tired.
07:21 Slydder lol
07:21 JoeJulian So no.
07:21 Slydder ok
07:21 post-factum latency
07:21 Slydder just woke up myself so am in the same boat.
07:21 JoeJulian I'm on my way to bed.
07:22 Slydder post-factum: was referring to profiling settings that may cause problems if not set correctly.
07:22 JoeJulian He's just trying to fill in my missing snark. :)
07:22 kshlm Slydder, We just have one profiling setting.
07:22 post-factum umm, then yes, no :)
07:22 Slydder lol
07:22 kshlm On or Off
07:22 Slydder ok
07:23 m0zes joined #gluster
07:32 mhulsman joined #gluster
07:33 mhulsman1 joined #gluster
07:34 hackman joined #gluster
07:35 DaKnOb joined #gluster
07:39 Slydder ok. if I use just "info cumulative" I get the info without the interval blocks for each brick
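
For reference, profiling is toggled per volume, and the cumulative counters (per-FOP latency, call counts, bytes read/written) are what a collector would scrape; the volume name below is a placeholder:

    gluster volume profile myvol start
    gluster volume profile myvol info cumulative
    gluster volume profile myvol stop
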
07:39 [Enrico] joined #gluster
07:40 skoduri joined #gluster
07:40 hackman joined #gluster
08:01 haomaiwa_ joined #gluster
08:13 gem joined #gluster
08:21 ayma joined #gluster
08:32 deniszh joined #gluster
08:36 btpier joined #gluster
08:38 overclk joined #gluster
08:39 sloop joined #gluster
08:40 Pintomatic joined #gluster
08:40 lezo joined #gluster
08:41 sc0 joined #gluster
08:42 tyler274 joined #gluster
08:43 twisted` joined #gluster
08:44 scubacuda joined #gluster
08:44 PotatoGim joined #gluster
08:50 RameshN joined #gluster
08:53 lh joined #gluster
08:55 jri joined #gluster
08:56 ctria joined #gluster
08:59 spalai joined #gluster
09:01 haomaiwang joined #gluster
09:03 unlaudable joined #gluster
09:10 nthomas joined #gluster
09:14 RameshN joined #gluster
09:14 madnexus joined #gluster
09:17 ovaistariq joined #gluster
09:17 jiffin1 joined #gluster
09:17 muneerse joined #gluster
09:18 Slashman joined #gluster
09:19 hchiramm joined #gluster
09:20 jiffin joined #gluster
09:23 muneerse joined #gluster
09:24 ghenry joined #gluster
09:31 mhulsman joined #gluster
09:32 mhulsman1 joined #gluster
09:44 robb_nl joined #gluster
09:45 ayma joined #gluster
09:51 Akee joined #gluster
10:00 hchiramm joined #gluster
10:01 haomaiwa_ joined #gluster
10:01 ninkotech joined #gluster
10:01 ninkotech_ joined #gluster
10:07 skoduri_ joined #gluster
10:09 kshlm joined #gluster
10:10 harish_ joined #gluster
10:14 vmallika joined #gluster
10:23 RayTrace_ joined #gluster
10:26 tallmocha joined #gluster
10:30 tallmocha joined #gluster
10:30 tallmocha left #gluster
10:34 ndarshan joined #gluster
10:34 vmallika joined #gluster
10:37 Javezim joined #gluster
10:37 tallmocha joined #gluster
10:40 tallmocha Hi, recently I added two new bricks to a Distributed-Replicate volume (3.7.8); for the last few days it's been rebalancing the data fine. But I've noticed that for the last 6 hours or so no more data is being moved and the skipped count keeps going up.
10:41 tallmocha Any ideas on what this could be? I can see an empty file being created on the new bricks, but no data is being moved into it.
10:42 tallmocha I see "data movement attempted from node (vol2-replicate-0:572739952) with higher disk space to a node (vol2-replicate-1:570672192) with lesser disk space" in the logs, but there is no "completed migration" message after
10:43 nthomas joined #gluster
10:56 baojg joined #gluster
11:01 haomaiwa_ joined #gluster
11:05 spalai joined #gluster
11:07 Javezim Hey all, currently have a 3.7.6 Distributed-Replicate setup on Ubuntu 14.04. We have a 5x5 layout and are noticing that the bricks often have different sizes/dates for the same files between them, which corrupts them on the Gluster client so they show in red and as ??????????. It's quite frequent now and it seems like the self-heal and rebalance parts of gluster are failing. Does anyone have any ideas as to why the bricks are falling out
11:07 Javezim of sync and ways we can get them back in sync?
11:16 bluenemo joined #gluster
11:18 ovaistariq joined #gluster
11:18 post-factum @split brain
11:18 glusterbot post-factum: To heal split-brains, see https://github.com/gluster/glusterfs/blob/master/doc/features/heal-info-and-split-brain-resolution.md . Also see splitmount https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/ . For additonal information, see this older article https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
11:18 post-factum Javezim: ^^
11:19 wnlx joined #gluster
11:19 arcolife joined #gluster
11:23 Javezim post-factum - is there a command that allows the bricks to heal the files based on the last modified date of the file?
11:23 Javezim Eg if Brick 1 on Machine 1 has a file dated 27th Jan and Brick 1 on Machine 2 has the same file dated 28th Jan
11:23 Javezim the 28th one goes over to machine 1
11:30 jwd joined #gluster
11:30 jwaibel joined #gluster
11:32 mhulsman joined #gluster
11:38 chirino joined #gluster
11:40 post-factum umm not sure
11:40 EinstCrazy joined #gluster
11:41 post-factum but you may specify preferred brick while healing the file
11:41 post-factum https://gluster.readthedocs.org/en/release-3.7.0/Features/heal-info-and-split-brain-resolution/
11:41 glusterbot Title: heal info and split brain resolution - Gluster Docs (at gluster.readthedocs.org)
11:41 post-factum "Select one replica as source for a particular file" section
11:42 ira joined #gluster
11:44 Javezim Ahhhh, see the problem I've found is that there is no real correct brick. Both will have equal amounts of correct files and equal amount of incorrect files.
11:46 Javezim We know for sure though (Due to the type of the file) that the latest one is always correct
11:47 skoduri_ joined #gluster
11:48 morse joined #gluster
11:48 spalai joined #gluster
11:48 kshlm joined #gluster
11:50 spalai left #gluster
11:53 mhulsman1 joined #gluster
11:56 nehar joined #gluster
11:56 post-factum Javezim: I believe some scripting should be involved to traverse the file list and make a decision
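
A rough sketch of that kind of script: for each split-brain path it compares mtimes on the two bricks and heals from the newer copy. Hostnames, brick paths, and the volume name are assumptions, it only handles entries reported by path (not by gfid), and it should be tried on a throwaway volume first:

    # Sketch: resolve split-brain files by taking the newer copy as the source.
    VOL=myvol; H1=machine1; H2=machine2; BRICK=/bricks/brick1
    gluster volume heal "$VOL" info split-brain | grep '^/' | sort -u |
    while read -r f; do
        m1=$(ssh "$H1" stat -c %Y "$BRICK$f" 2>/dev/null || echo 0)
        m2=$(ssh "$H2" stat -c %Y "$BRICK$f" 2>/dev/null || echo 0)
        # newer mtime wins; ties go to the first brick
        if [ "${m1:-0}" -ge "${m2:-0}" ]; then src="$H1:$BRICK"; else src="$H2:$BRICK"; fi
        gluster volume heal "$VOL" split-brain source-brick "$src" "$f"
    done
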
12:01 haomaiwa_ joined #gluster
12:05 Gnomethrower joined #gluster
12:07 nehar joined #gluster
12:08 Manikandan joined #gluster
12:09 karnan joined #gluster
12:10 jiffin tallmocha: it is because the newly added disk (target) has less space than the previous one (source), hence it skipped the migration.
12:10 jiffin u can try adding force at the end of the rebalance command
12:11 jiffin then the data will migrate accordingly
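
The force variant jiffin means here bypasses the free-space check that produces the "higher disk space ... lesser disk space" skips; the volume name is a placeholder:

    gluster volume rebalance myvol start force
    gluster volume rebalance myvol status
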
12:13 vmallika joined #gluster
12:17 gem joined #gluster
12:18 prasanth joined #gluster
12:18 RayTrace_ joined #gluster
12:30 johnmilton joined #gluster
12:31 robb_nl joined #gluster
12:32 RayTrace_ joined #gluster
12:32 arcolife joined #gluster
12:33 tallmocha jiffin: hmm, but the new disk set is exactly half the size of the old existing one. I thought this version of gluster balances the files taking brick sizes into account. My calculations show the new bricks are 10% under-used and the old bricks are still 10% over-used.
12:37 Chinorro joined #gluster
12:41 tallmocha jiffin: there is 853GB used currently, the whole volume is 1.4T. The old bricks (1TB) currently using 659GB 71% and the new bricks (500GB) are using 194GB 42%. I would expect the new bricks to have 284GB and the old ones 568GB at the end of the rebalance.
12:42 shubhendu joined #gluster
12:43 ndarshan joined #gluster
12:46 ctria joined #gluster
12:48 arcolife joined #gluster
12:48 prasanth joined #gluster
12:55 DaKnOb_ joined #gluster
12:55 GreatSnoopy joined #gluster
12:55 GreatSnoopy hello all
12:57 GreatSnoopy what is the proper way to add a new brick to a replicated volume? I did a "gluster volume add-brick bpi replica 2 dln-02:/srv/gluster/brick01/bpi force" which said "volume add-brick: success", but after that I would expect the new brick to start getting content; however, this simply does not happen
12:57 GreatSnoopy did I miss some step in the docs to trigger the resilvering of the new brick ?
13:01 post-factum GreatSnoopy: you should launch rebalance explicitly
13:01 haomaiwa_ joined #gluster
13:01 post-factum @rebalance
13:01 glusterbot post-factum: I do not know about 'rebalance', but I do know about these similar topics: 'replace'
13:01 GreatSnoopy tried that, said "volume rebalance: bpi: failed: Volume bpi is not a distribute volume or contains only 1 brick."
13:02 post-factum ah, *replica*
13:02 GreatSnoopy yeah
13:02 post-factum well, there is a heal for that :)
13:02 chirino joined #gluster
13:02 GreatSnoopy Launching heal operation to perform index self heal on volume bpi has been successful
13:02 GreatSnoopy but nothing happens
13:03 GreatSnoopy (tried that too :D )
13:03 post-factum gluster volume heal bpi info
13:03 mhulsman joined #gluster
13:03 post-factum @paste
13:03 glusterbot post-factum: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
13:04 RameshN joined #gluster
13:04 GreatSnoopy no direct connection for netcat, moment
13:04 mhulsman1 joined #gluster
13:05 GreatSnoopy http://pastebin.com/dzhtK9wm
13:05 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
13:05 GreatSnoopy hm
13:05 GreatSnoopy ok
13:05 GreatSnoopy http://fpaste.org/336672/76151301/
13:05 glusterbot Title: #336672 Fedora Project Pastebin (at fpaste.org)
13:06 arcolife joined #gluster
13:06 post-factum what if you mount your volume under, saying, /mnt/bpi and launch "find /mnt/bpi -print"?
13:07 GreatSnoopy if i mount dln-01:/bpi (initial node) i can see all the files, if i mount dln-02 (new node) i can see it with occupied space but no files
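
For a brick added with add-brick ... replica 2, the usual ways to pull existing data onto it are a full self-heal or a client-side crawl that looks up every file; in terms of the names above, roughly (the mount point is arbitrary):

    gluster volume heal bpi full
    # or trigger lookups from a client mount:
    mount -t glusterfs dln-01:/bpi /mnt/bpi
    find /mnt/bpi -noleaf -print0 | xargs -0r stat > /dev/null
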
13:08 mbukatov joined #gluster
13:08 post-factum umm
13:08 Slydder so. just finished the first version of a diamond collector for glusterfs.
13:08 post-factum gluster peer status please
13:09 GreatSnoopy http://fpaste.org/336675/76153571/
13:09 glusterbot Title: #336675 Fedora Project Pastebin (at fpaste.org)
13:09 DaKnOb joined #gluster
13:10 post-factum looks good
13:10 GreatSnoopy that what i thought, too :)
13:10 post-factum how many files do you have on volume?
13:10 GreatSnoopy LOTS
13:10 sebamontini joined #gluster
13:10 ppai joined #gluster
13:11 post-factum GreatSnoopy: probably, you have to wait for them to be scanned
13:11 post-factum GreatSnoopy: launch htop or top and find glustershd there
13:11 post-factum does it consume some cpu?
13:12 GreatSnoopy i only have a "/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/18deb641a5b7c32284ef20a59808f590.socket --xlator-option *replicate*.node-uuid=08a2a9a1-63f2-4e05-831f-580af2dea106"
13:12 GreatSnoopy but it does not consume any cpu
13:13 GreatSnoopy if it did i would have assumed its working
13:13 GreatSnoopy also, I/O on storage is basically 0
13:13 poornimag joined #gluster
13:14 post-factum ok, what about firewalling on both nodes?
13:15 GreatSnoopy same subnet, no firewall
13:15 GreatSnoopy this is what i have in the glustershd.log on the second, NEW node :
13:15 ndarshan joined #gluster
13:15 GreatSnoopy [2016-03-10 12:47:14.744433] I [client-handshake.c:1200:client_setvolume_cbk] 0-bpi-client-1: Connected to bpi-client-1, attached to remote volume '/srv/gluster/brick01/bpi'.
13:15 GreatSnoopy [2016-03-10 12:47:14.744463] I [client-handshake.c:1210:client_setvolume_cbk] 0-bpi-client-1: Server and Client lk-version numbers are not same, reopening the fds
13:15 GreatSnoopy [2016-03-10 12:47:14.744682] I [client-handshake.c:188:client_set_lk_version_cbk] 0-bpi-client-1: Server lk version = 1
13:15 GreatSnoopy [2016-03-10 12:50:16.820177] I [afr-self-heald.c:731:afr_shd_full_healer] 0-bpi-replicate-0: starting full sweep on subvol bpi-client-1
13:15 glusterbot GreatSnoopy: This is normal behavior and can safely be ignored.
13:15 GreatSnoopy [2016-03-10 12:50:16.821248] I [afr-self-heald.c:741:afr_shd_full_healer] 0-bpi-replicate-0: finished full sweep on subvol bpi-client-1
13:16 RayTrace_ joined #gluster
13:17 GreatSnoopy i even tried  remove brick, mkfs on the storage disk and re-add
13:17 nthomas joined #gluster
13:17 GreatSnoopy same result
13:20 post-factum gluster volume info bpi
13:20 post-factum gluster volume status bpi
13:22 GreatSnoopy http://fpaste.org/336680/16123145/
13:22 glusterbot Title: #336680 Fedora Project Pastebin (at fpaste.org)
13:23 post-factum gluster version?
13:23 GreatSnoopy glusterfs-server-3.6.9-1.el6.x86_64
13:23 GreatSnoopy ...centos 6
13:23 post-factum i'm out of ideas right now
13:24 post-factum try to check other logs an new node
13:24 post-factum except glustershd.log
13:24 LessSeen_ joined #gluster
13:24 post-factum s/an/at/
13:24 glusterbot What post-factum meant to say was: try to check other logs at new node
13:27 GreatSnoopy question
13:27 GreatSnoopy when a new node is added
13:27 GreatSnoopy who initiates data transfer
13:27 GreatSnoopy an existing node towards the new nodes
13:27 GreatSnoopy or the other way around
13:28 GreatSnoopy so that i know where to look, in first node's glustershd.log or in the new node
13:28 post-factum we need to ask devs about that
13:28 nehar joined #gluster
13:28 post-factum but it is better to check both nodes logs
13:30 GreatSnoopy meanwhile i have found another possible issue : gluster processes log messages ignoring timezone info :) that's a small issue tho
13:30 ayma joined #gluster
13:30 post-factum it shouldn't have any impact
13:30 post-factum my logs do the same
13:31 jiffin GreatSnoopy: for replicated volumes, data transfer can be initiated by two actors - the self-heal daemon and the client process
13:32 GreatSnoopy jiffin, i was asking who connects to whom to get data for the new brick, so that i know where to look
13:33 jiffin GreatSnoopy: nodes won't talk to each other
13:34 GreatSnoopy how is data resync-ed when adding a new brick then ?
13:34 jiffin GreatSnoopy: it can be initiated by two ways
13:34 tswartz joined #gluster
13:34 jiffin GreatSnoopy: from client if a lookup is performed on a file
13:35 ppai joined #gluster
13:35 Slashman joined #gluster
13:35 jiffin and found it is missing on the new node, then it copies data from old to new
13:36 jiffin GreatSnoopy: in the second method through self heal daemon
13:36 GreatSnoopy good, so how do i convince the self heal daemon to actually do what it should do ?
13:37 jiffin heal cli command
13:37 GreatSnoopy i should run this on the first (existing ) node or on the NEW node ?
13:37 jiffin and I am not an expert on this
13:37 plarsen joined #gluster
13:38 jiffin GreatSnoopy: it does not matter IMO
13:38 GreatSnoopy well, already tried that and nothing seems to happen
13:38 jiffin u can run the gluster cli command from any node
13:38 GreatSnoopy i just did it from the new node
13:38 GreatSnoopy says
13:39 GreatSnoopy Launching heal operation to perform index self heal on volume bpi has been successful
13:39 GreatSnoopy Use heal info commands to check status
13:39 jiffin GreatSnoopy: u can check the heal status command and /var/log/glusterfs/glustershd.log for more details
13:40 GreatSnoopy unfortunately there is nothing useful there, just a moment i'll paste that
13:40 gem joined #gluster
13:41 RayTrace_ joined #gluster
13:43 GreatSnoopy http://fpaste.org/336690/61737714/
13:43 glusterbot Title: #336690 Fedora Project Pastebin (at fpaste.org)
13:43 GreatSnoopy nothing looks odd to me there
13:43 GreatSnoopy there is also glfsheal-bpi.log in http://fpaste.org/336692/61742914/
13:43 jiffin GreatSnoopy: u can send mail to the gluster-devel/gluster-users list, which has large participation from developers
13:43 glusterbot Title: #336692 Fedora Project Pastebin (at fpaste.org)
13:44 newbeen joined #gluster
13:44 GreatSnoopy jiffin, is it a closed mailing list (i have to join first ? )
13:45 jiffin GreatSnoopy: it is not closed list
13:45 jiffin u can join if u want
13:46 newbeen Hello guys I have a question regarding the gluster implementation, will it work with RDMA under Ethernet?
13:47 jiffin GreatSnoopy: can u try one more thing: do an explicit lookup on a file from the client and check whether the file is healed
13:49 jiffin newbeen: sorry, I am not clear, what did u mean by RDMA under Ethernet?
13:49 jiffin newbeen: gluster has RDMA support
13:50 nbalacha joined #gluster
13:50 GreatSnoopy jiffin, doing this now
13:50 GreatSnoopy moment
13:50 ivan_rossi joined #gluster
13:51 newbeen I just got some Mellanox 10Gb network cards with rdma support, will that work with the gluster implementation for RDMA or is it tied to Infiniband?
13:51 unclemarc joined #gluster
13:52 DV joined #gluster
13:53 DV_ joined #gluster
13:53 ira joined #gluster
13:54 newbeen I just got my answer in this article http://www.unlocksmith.org/2009/11/infiniband-10gige-and-glusterfs.html it seems like it supports RDMAoE
13:54 glusterbot Title: Unlocksmith: Infiniband, 10GigE and GlusterFS (at www.unlocksmith.org)
13:57 kshlm joined #gluster
13:58 shyam joined #gluster
14:00 GreatSnoopy it seems that if i mount the second node, i can access the files. BUT it seems that the client gets the files from the first node
14:00 GreatSnoopy because on the second node, there is still only one MB of space occupied :))
14:01 jiffin newbeen: i am not sure, u can give it a try, all the rdma-related tests were performed on Infiniband
14:02 jiffin GreatSnoopy: glusterfs client will be connected to all the servers in the storage pool
14:02 DV joined #gluster
14:02 GreatSnoopy well, my issue is that now i have a second node but data is not mirrored, if the first node crashes the data is gone
14:02 jiffin the server u mentioned is used by the client to fetch the vol file (configuration file) and it is a one-time process
14:03 jiffin GreatSnoopy: u are using a replicated volume, right?
14:03 jiffin can u provide the vol info
14:04 GreatSnoopy i'll paste it yet again, just a second
14:04 haomaiwang joined #gluster
14:04 kshlm joined #gluster
14:07 GreatSnoopy http://fpaste.org/336711/61882014/
14:07 glusterbot Title: #336711 Fedora Project Pastebin (at fpaste.org)
14:08 jiffin GreatSnoopy: it's better u send a mail to the list, with all the steps u performed
14:08 jiffin GreatSnoopy: I am out of ideas
14:10 post-factum jiffin++ thanks for trying
14:10 glusterbot post-factum: jiffin's karma is now 3
14:12 bennyturns joined #gluster
14:13 newbeen jiffin: thank you for the reply, I will give it a try with both enabled and disabled and see if it is working fine!
14:14 sebamontini joined #gluster
14:15 Apeksha joined #gluster
14:16 coredump joined #gluster
14:26 ninjaryan joined #gluster
14:29 monotek1 joined #gluster
14:29 DaKnOb joined #gluster
14:32 tallmocha joined #gluster
14:32 Jules- joined #gluster
14:35 Jules- anyone ever had such an error message while chowning a directory over nfs: chown: changing ownership of ‘directory/file.zip’: Input/output error ?
14:35 skylar joined #gluster
14:35 XpineX joined #gluster
14:35 ninjaryan joined #gluster
14:37 ggarg joined #gluster
14:37 robb_nl joined #gluster
14:39 harish_ joined #gluster
14:40 nehar joined #gluster
14:42 Marbug when you add a node to glusterfs, can you execute all the commands to replicate the server on the new node, or do you need to do all those commands on the old node you want to replicate from ?
14:49 hagarth joined #gluster
14:52 yalu left #gluster
14:53 Marbug nvm it needs to be on the first host then :)
14:53 aravindavk joined #gluster
14:56 amye joined #gluster
14:59 shyam joined #gluster
15:00 nthomas joined #gluster
15:01 haomaiwa_ joined #gluster
15:02 rwheeler joined #gluster
15:02 kshlm joined #gluster
15:05 jwd joined #gluster
15:23 bennyturns joined #gluster
15:26 shyam joined #gluster
15:35 jwaibel joined #gluster
15:41 d0nn1e joined #gluster
15:42 gem joined #gluster
15:45 deniszh joined #gluster
15:48 farhorizon joined #gluster
15:49 RameshN joined #gluster
15:51 calavera joined #gluster
15:53 ayma joined #gluster
15:54 ninjaryan joined #gluster
16:00 mhulsman joined #gluster
16:01 RameshN joined #gluster
16:01 haomaiwa_ joined #gluster
16:02 DaKnOb joined #gluster
16:11 ninjaryan joined #gluster
16:13 RameshN joined #gluster
16:14 farhorizon joined #gluster
16:17 LessSeen_ joined #gluster
16:31 RameshN joined #gluster
16:32 kshlm joined #gluster
16:32 coredump joined #gluster
16:33 shubhendu joined #gluster
16:35 pur joined #gluster
16:36 RameshN_ joined #gluster
16:38 ninjaryan joined #gluster
16:40 deniszh1 joined #gluster
16:41 robb_nl joined #gluster
16:45 nathwill joined #gluster
16:46 skoduri joined #gluster
16:48 spalai joined #gluster
16:50 karnan joined #gluster
16:59 B21956 joined #gluster
16:59 DV joined #gluster
17:03 EinstCrazy joined #gluster
17:10 ovaistariq joined #gluster
17:12 ovaistar_ joined #gluster
17:21 overclk joined #gluster
17:23 rafi1 joined #gluster
17:23 haomaiwa_ joined #gluster
17:25 Slydder https://wiki.itadmins.net/filesystems/glusterfs_diamond_metrics
17:25 glusterbot Title: GlusterFS Diamond Collector [IT Admins Group] (at wiki.itadmins.net)
17:31 Merlin_ joined #gluster
17:32 Merlin_ joined #gluster
17:36 mhulsman joined #gluster
17:42 hchiramm joined #gluster
17:49 Wizek joined #gluster
17:50 sebamontini joined #gluster
17:50 unlaudable joined #gluster
17:52 jiffin joined #gluster
17:53 overclk joined #gluster
17:53 jri joined #gluster
17:55 jri joined #gluster
17:56 Wizek_ joined #gluster
18:02 DaKnOb joined #gluster
18:03 jiffin joined #gluster
18:06 ivan_rossi left #gluster
18:10 hackman joined #gluster
18:11 Merlin_ joined #gluster
18:11 Merlin_ joined #gluster
18:12 Merlin_ joined #gluster
18:14 Hamburglr joined #gluster
18:15 Hamburglr does "gluster volume heal myvol1 info split-brain" produce a lot of I/O checking all files, or does it already have a list that it simply outputs?
18:17 overclk Hamburglr: it picks up the list from <brick>/.glusterfs/index
18:17 rafi joined #gluster
18:18 overclk Hamburglr: hold on, I misread it as heal info..
18:28 calavera joined #gluster
18:28 RayTrace_ joined #gluster
18:29 syadnom_ joined #gluster
18:29 sebamontini joined #gluster
18:33 pur joined #gluster
18:38 ninjaryan joined #gluster
18:38 Merlin_ joined #gluster
18:40 ahino joined #gluster
18:41 Hamburglr overclk: were you able to find out?
18:42 overclk Hamburglr: I didn't check, doing something else..
18:44 Hamburglr overclk: no problem, if you get a chance whenever I'll be around and would appreciate it
18:44 overclk Hamburglr: give me some time, if I'm around I'll look it up.
18:48 mhulsman joined #gluster
18:49 overclk Hamburglr: It does look to be figuring it out for the indexes (tracking) directory
18:50 overclk s/for/from/
18:50 glusterbot What overclk meant to say was: Hamburglr: It does look to be figuring it out from the indexes (tracking) directory
18:55 Merlin_ joined #gluster
19:04 kevein joined #gluster
19:05 Merlin_ joined #gluster
19:07 Hamburglr overclk: sorry, does that mean it reads over every file/directory or no?
19:07 farhorizon joined #gluster
19:08 haomaiwang joined #gluster
19:08 overclk Hamburglr: just iterates the list, i.e. no filesystem scan as such.
19:09 Hamburglr cool thanks
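
For reference, the index being iterated lives under each brick's .glusterfs/indices directory, so the size of the pending list can be eyeballed directly on a brick without a data crawl (the brick path is a placeholder):

    ls /bricks/brick1/.glusterfs/indices/xattrop | wc -l
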
19:15 RayTrace_ joined #gluster
19:15 tswartz joined #gluster
19:17 LessSeen_ joined #gluster
19:23 rwheeler joined #gluster
19:27 Merlin_ joined #gluster
19:29 deniszh joined #gluster
19:31 RayTrace_ joined #gluster
19:34 csterling joined #gluster
19:34 Philambdo1 joined #gluster
19:35 csterling Hey @JoeJulian
19:35 ninjaryan joined #gluster
19:37 csterling I’ve been looking at a script you recommended - https://github.com/gluster/glusterfs/blob/master/extras/rebalance.py - but I can’t figure out how to use it - it looks like maybe I take the values it outputs and stick them somewhere, but my n00b-ish-ness is getting in the way of that
19:37 glusterbot Title: glusterfs/rebalance.py at master · gluster/glusterfs · GitHub (at github.com)
19:38 Merlin_ joined #gluster
19:45 csterling I ended up doing this: http://blog.angits.net/2014/04/glusterfs-rebalance-weight-based-wip/
19:48 B21956 joined #gluster
19:48 ninjaryan joined #gluster
19:50 Merlin_ joined #gluster
19:53 GreatSnoopy joined #gluster
19:59 DaKnOb joined #gluster
20:03 farhoriz_ joined #gluster
20:07 Merlin_ joined #gluster
20:08 mhulsman joined #gluster
20:10 sebamontini joined #gluster
20:11 DV joined #gluster
20:15 Merlin_ joined #gluster
20:18 RayTrace_ joined #gluster
20:25 andy-b joined #gluster
20:29 ninjaryan joined #gluster
20:36 andy-b joined #gluster
20:43 shyam joined #gluster
20:52 Merlin_ joined #gluster
20:54 csterling left #gluster
20:57 jkroon joined #gluster
21:02 Merlin_ joined #gluster
21:03 calavera joined #gluster
21:14 Merlin_ joined #gluster
21:24 R0ok_ joined #gluster
21:27 Merlin_ joined #gluster
21:27 Merlin_ joined #gluster
21:28 Merlin_ joined #gluster
21:33 jkroon hi all, after executing a replace-brick it seems the data is not being properly synced to the new brick.  running gluster 3.7.8 ... expecting about 21G of data to go to the brick, but it seems instead data is being deleted from the other replica, and only about 650MB of data got moved onto the new brick ... ?
21:34 jkroon what's the correct procedure for replacing a brick (goal is to migrate data onto new servers)
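
For the record, on 3.7 the supported data-migrating form of replace-brick is commit force, after which self-heal (not rebalance) populates the new brick; a sketch with placeholder names, checking heal counts before touching the old copy:

    gluster volume replace-brick myvol oldhost:/bricks/brick1 newhost:/bricks/brick1 commit force
    gluster volume heal myvol full
    gluster volume heal myvol info
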
21:38 ggarg joined #gluster
21:39 haomaiwa_ joined #gluster
21:41 owlbot joined #gluster
21:41 deniszh joined #gluster
21:48 mpietersen joined #gluster
21:58 R0ok_ joined #gluster
21:59 mpietersen joined #gluster
22:01 mpietersen joined #gluster
22:02 ghenry joined #gluster
22:03 mpietersen joined #gluster
22:04 mpietersen joined #gluster
22:05 ggarg joined #gluster
22:05 mpietersen joined #gluster
22:05 RayTrace_ joined #gluster
22:07 DV joined #gluster
22:07 mpietersen joined #gluster
22:07 ayma1 joined #gluster
22:09 ayma2 joined #gluster
22:09 mpietersen joined #gluster
22:11 mpietersen joined #gluster
22:13 R0ok_ joined #gluster
22:13 mpieters_ joined #gluster
22:15 mpietersen joined #gluster
22:16 mpietersen joined #gluster
22:22 DaKnOb joined #gluster
22:33 hagarth joined #gluster
23:07 hackman joined #gluster
23:12 shyam joined #gluster
23:15 hackman joined #gluster
23:33 ovaistariq joined #gluster
23:44 amye joined #gluster
