
IRC log for #gluster, 2016-03-30


All times shown according to UTC.

Time Nick Message
00:14 ashiq_ joined #gluster
00:19 vmallika joined #gluster
00:28 Neilo joined #gluster
00:34 Hesulan joined #gluster
00:36 ovaistariq joined #gluster
00:48 camg joined #gluster
01:12 ggarg joined #gluster
01:13 EinstCrazy joined #gluster
01:31 semiosis joined #gluster
01:31 raginbajin joined #gluster
01:32 p8952 joined #gluster
01:33 shaunm joined #gluster
01:33 necrogami joined #gluster
01:33 Hesulan joined #gluster
01:33 PaulePanter joined #gluster
01:34 EinstCrazy joined #gluster
01:38 DV__ joined #gluster
01:38 chirino_m joined #gluster
01:40 ovaistariq joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:05 ovaistariq joined #gluster
02:10 Lee1092 joined #gluster
02:18 julim joined #gluster
02:35 camg joined #gluster
02:42 chirino joined #gluster
03:00 ovaistariq joined #gluster
03:15 baojg joined #gluster
03:16 jhyland joined #gluster
03:18 nishanth joined #gluster
03:19 ovaistariq joined #gluster
03:37 overclk joined #gluster
03:42 mowntan joined #gluster
03:42 vmallika joined #gluster
03:42 chirino joined #gluster
03:46 vmallika joined #gluster
03:55 nehar joined #gluster
03:58 scoban joined #gluster
04:01 shubhendu joined #gluster
04:02 itisravi joined #gluster
04:04 nbalacha joined #gluster
04:08 RameshN joined #gluster
04:09 shubhendu joined #gluster
04:10 gem joined #gluster
04:13 camg joined #gluster
04:14 kanagaraj joined #gluster
04:19 camg joined #gluster
04:19 atinm joined #gluster
04:20 jiffin joined #gluster
04:20 ovaistariq joined #gluster
04:28 d0nn1e joined #gluster
04:36 ppai joined #gluster
04:41 hgowtham joined #gluster
04:49 kshlm joined #gluster
04:51 sakshi joined #gluster
04:54 karthik___ joined #gluster
04:58 gowtham joined #gluster
05:06 ndarshan joined #gluster
05:06 prasanth joined #gluster
05:08 EinstCrazy joined #gluster
05:09 Manikandan joined #gluster
05:10 bcdonadio joined #gluster
05:11 bcdonadio when adding a new replica to an old volume, how should I start the replication?
05:14 poornimag joined #gluster
05:20 Apeksha joined #gluster
05:21 nishanth joined #gluster
05:23 aspandey joined #gluster
05:26 itisravi bcdonadio: are you increasing the replica count? what version of gluster?
05:28 bcdonadio itisravi: yes, I have 2 replicas, and want to add a third one, Gluster 3.7.9
05:30 itisravi bcdonadio: `gluster vol heal volname full` should do it
05:30 itisravi After adding the brick, that is.
05:31 bcdonadio itisravi: how do I know it is finished? what should I see in the heal info?
05:32 itisravi bcdonadio: heal info should show zero entries eventually. You can also check the brick sizes in the backend.
05:33 bcdonadio ok, thanks ^^
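A minimal sketch of the sequence itisravi describes, assuming a volume named gv0 and a new brick on a host called server3 (volume name, host and brick path are placeholders, not taken from the log):

    # raise the replica count from 2 to 3 by adding the new brick
    gluster volume add-brick gv0 replica 3 server3:/data/brick1/gv0
    # copy the existing data onto the new brick
    gluster volume heal gv0 full
    # the heal is finished once this eventually reports zero entries
    gluster volume heal gv0 info

Comparing brick sizes on the backend, as itisravi notes, is a rough secondary check.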
05:34 rafi joined #gluster
05:39 mhulsman joined #gluster
05:41 Bhaskarakiran joined #gluster
05:44 chirino joined #gluster
05:46 pur joined #gluster
05:47 spalai joined #gluster
05:47 aravindavk joined #gluster
05:48 karnan joined #gluster
05:52 Bhaskarakiran joined #gluster
05:55 skoduri joined #gluster
05:55 camg receiving a lot of error messages in brick logs: http://fpaste.org/347286/raw/
05:55 camg logs are nice and specific but hard to decipher
05:59 EinstCra_ joined #gluster
06:02 ggarg joined #gluster
06:06 haomaiwa_ joined #gluster
06:07 anil_ joined #gluster
06:08 ovaistariq joined #gluster
06:10 spalai left #gluster
06:10 kotreshhr joined #gluster
06:11 spalai joined #gluster
06:13 nbalacha joined #gluster
06:13 Wizek joined #gluster
06:20 kdhananjay joined #gluster
06:22 bcdonadio when trying to add a replica and issuing a full heal on the volume, nothing happens except that the clients now mount an empty volume, what could be happening?
06:24 bcdonadio gluster volume heal gv-id info shows 0 entries to be healed, and trying to enable self-heal claims that some brick process is down, despite none of them having crashed
06:24 karthik___ joined #gluster
06:29 harish__ joined #gluster
06:30 shubhendu joined #gluster
06:35 ramky joined #gluster
06:39 spalai left #gluster
06:40 Bhaskarakiran joined #gluster
06:42 sakshi joined #gluster
06:46 Wizek joined #gluster
06:47 ramky joined #gluster
06:49 sakshi joined #gluster
06:50 rwheeler joined #gluster
06:51 atalur joined #gluster
06:58 itisravi joined #gluster
06:59 [Enrico] joined #gluster
07:00 baojg joined #gluster
07:01 haomaiwa_ joined #gluster
07:02 camg bcdonadio: Have you tried force starting the volume?  I have seen that suggested to ensure the brick processes are actually running.
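A sketch of the force start camg is referring to, again with a placeholder volume name; force re-spawns any brick or self-heal processes that are missing without affecting bricks that are already up:

    gluster volume start gv0 force
    # verify that every brick now lists a port and a PID and that
    # the self-heal daemon shows as online
    gluster volume status gv0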
07:02 itisravi bcdonadio: see if the workaround listed in http://www.spinics.net/lists/gluster-users/msg25326.html helps
07:02 glusterbot Title: Re: Unexpected behaviour adding a third server Gluster Users (at www.spinics.net)
07:04 deniszh joined #gluster
07:04 bcdonadio camg: yes, the brick processes are running on all nodes without errors
07:04 bcdonadio itisravi: I will try, just a minute
07:06 sakshi joined #gluster
07:07 fattaneh joined #gluster
07:08 bcdonadio itisravi: in the second step, "kill the 3rd brick" means removing the brick from the volume, or just killing the brick process?
07:08 itisravi joined #gluster
07:13 bcdonadio itisravi: in the second step, "kill the 3rd brick" means removing the brick from the volume, or just killing the brick process?
07:14 [diablo] joined #gluster
07:15 jri joined #gluster
07:16 mhulsman joined #gluster
07:17 camg bcdonadio: it reads as just the process
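If the workaround does mean killing only the brick process, one hedged way to do that (volume name and PID are placeholders):

    # the Brick lines of the status output include each brick's PID
    gluster volume status gv0
    # stop just that glusterfsd process; the rest of the volume stays up
    kill <pid-of-third-brick>
    # bring the brick back afterwards without restarting anything else
    gluster volume start gv0 force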
07:18 spalai joined #gluster
07:18 spalai left #gluster
07:22 bcdonadio itisravi: after following those instructions, healing still does not work
07:22 bcdonadio if I run gluster volume heal gv-volume info healed it gives: Gathering list of healed entries on volume gv-letsencrypt has been unsuccessful on bricks that are down. Please check if all brick processes are running.
07:24 scobanx I am getting I [dict.c:473:dict_get] (-->/lib64/libglusterfs.so.0(default_getxattr_cbk+0xac) [0x7fc2f5b9dc2c] -->/usr/lib64/glusterfs/3.7.9/xlator/features/marker.so(marker_getxattr_cbk+0xa7) [0x7fc2e224b857] -->/lib64/libglusterfs.so.0(dict_get+0xac) [0x7fc2f5b8e06c] ) 0-dict: !this || key=() [Invalid argument]
07:24 glusterbot scobanx: ('s karma is now -128
07:24 scobanx in brick logs
07:25 scobanx Can I prevent this? Is it a problem?
07:26 ndevos gbox: if bind mounts do not work, I would recommend you to file a bug with exact steps (or a script) to reproduce the issue
07:26 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
07:26 ndevos gbox: and yes, if that log is the only result you get, it looks like a bug to me :)
07:27 camg ndevos:  Thanks for following up!
07:28 * ndevos o_O
07:28 camg ndevos: At home so different account
07:28 _nixpanic camg: yeah, I understand!
07:29 rouven joined #gluster
07:29 camg ndevos: nice one!  Too bad though I was hoping for a simple answer.
07:30 camg Does DHT use the pathname, filename, ?
07:30 ndevos camg: actually I thought it was working... pretty sure someone reported that at one point
07:30 camg ndevos: It does *work*
07:31 ndevos camg: DHT uses the filename, and some xattr that is set in the parent directory
07:31 camg ndevos: Interesting, so if a directory is moved its contents don't scatter across nodes
07:32 ctria joined #gluster
07:33 ndevos camg: indeed, the contents of the directory stay where they are, they do not notice the parent dir moved
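A way to see the layout ndevos describes, assuming direct access to a brick directory (paths are placeholders); each directory on a brick carries a DHT layout xattr mapping filename-hash ranges to subvolumes:

    # run on a brick, not on the FUSE mount
    getfattr -n trusted.glusterfs.dht -e hex /data/brick1/gv0/somedir
    # the subvolume for a file is picked by hashing its name against this
    # layout, which is why moving the parent directory does not move its files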
07:36 ahino joined #gluster
07:36 fsimonce joined #gluster
07:38 32NAAOXD9 joined #gluster
07:41 btspce joined #gluster
07:43 btspce What default settings for a volume do you change for a distributed-replicated setup for large qcow files? Do you optimize for large or small files?
07:45 chirino joined #gluster
07:48 ppai joined #gluster
07:50 nishanth joined #gluster
07:56 ovaistariq joined #gluster
07:57 ivan_rossi joined #gluster
07:58 morse joined #gluster
08:01 haomaiwang joined #gluster
08:05 scobanx Hi, with a 1560 brick 60 node cluster, 'gluster v status v0 clients' command gives Error: request timed out
08:06 scobanx I never made it work, anything to do?
08:10 camg scobanx: You could manually do what that cli command does.  Do you have ansible, salt, or a simple ssh loop available?
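One hedged way to approximate camg's loop idea: query one brick at a time instead of the whole 1560-brick volume, on the assumption that it is the cluster-wide aggregation that times out (volume and brick names are placeholders, taken from nothing in the log):

    for b in server01:/bricks/b01/v0 server01:/bricks/b02/v0; do
        # a per-brick query returns far less data than 'clients'
        # across the entire volume
        gluster volume status v0 "$b" clients
    done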
08:11 Skaag I did not yet put any data in a volume, can I still switch it to 'distribute'?
08:11 Skaag (from 'replicate')
08:15 camg I see many [client-rpc-fops.c:969:client3_3_flush_cbk] 0-gv0-client-2: remote operation failed [Transport endpoint is not connected]
08:15 ppai joined #gluster
08:15 EinstCra_ joined #gluster
08:15 camg Skaag: Yeah just start over (delete the volume & recreate)?
08:16 Skaag that's what I just did
08:16 mhulsman joined #gluster
08:16 camg Skaag: nice
08:16 Skaag I assumed everybody here was sleeping and I just experimented
08:16 Skaag I used 'stripe 5', too
08:16 Skaag hopefully it will be OK :)
08:16 Skaag (there are 5 bricks)
08:16 ivan_rossi left #gluster
08:17 camg Skaag: Not sure about stripe: https://joejulian.name/blog/should-i-use-stripe-on-glusterfs/
08:17 glusterbot Title: Should I use Stripe on GlusterFS? (at joejulian.name)
08:18 camg Skaag: Distributed and stripe achieve the same goal but with very different approaches and consequences.  Or is there another stripe option?
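The start-over route camg mentioned, as a rough sketch for a plain distribute volume across five bricks (host names and brick paths are placeholders; bricks reused from a deleted volume usually need to be cleaned or replaced with fresh directories):

    gluster volume stop gv0
    gluster volume delete gv0
    # no 'replica' or 'stripe' keyword: each whole file lands on one brick
    gluster volume create gv0 node1:/data/brick1/gv0 node2:/data/brick1/gv0 \
        node3:/data/brick1/gv0 node4:/data/brick1/gv0 node5:/data/brick1/gv0
    gluster volume start gv0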
08:18 camg I should be asleep
08:18 Slashman joined #gluster
08:19 camg scobanx: Actually for that command that would be hard to manually script.  Where does gluster even get that info?
08:20 frakt joined #gluster
08:20 Skaag I should be asleep myself
08:20 Skaag 1:20am....... and, I'm exhausted
08:21 Skaag when using root fs bricks, will the volume grow when I clear up space?
08:21 camg Skaag: Hey west coast north america!  I'm in LA.
08:21 Skaag I'm in LA myself
08:22 camg Skaag: nice.  Root fs bricks?  You mean the root partition?
08:22 Skaag yes
08:22 Skaag instead of a dedicated drive for example
08:22 camg Skaag: You've got to at least have a separate partition.
08:22 Skaag I installed this on VM's that have a single volume (SSD based)
08:23 camg Skaag:  Yeah I guessed that might be it.  Is this just to experiment?
08:24 camg Skaag: Are there any gluster meet ups in LA?  It's kind of a Ceph city being their home base & all.
08:25 frakt joined #gluster
08:26 camg "gluster volume heal volumename info" just hangs.  This is not looking good.
08:27 Skaag not that I know of (the meetups), but I haven't tried looking
08:28 Skaag if you go to one, please let me know
08:28 Skaag I'd love to check it out
08:28 Skaag Ceph wise: It was super complex to setup...
08:28 camg Skaag: On your setup, can't you just create other VM block devices to use?  Or partition the image you install?
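A sketch of giving each VM a dedicated brick filesystem instead of using the root partition, assuming a spare virtual disk shows up as /dev/vdb (device name, mount point and volume name are placeholders):

    mkfs.xfs -i size=512 /dev/vdb     # XFS is the commonly used brick filesystem
    mkdir -p /data/brick1
    mount /dev/vdb /data/brick1
    echo '/dev/vdb /data/brick1 xfs defaults 0 0' >> /etc/fstab
    mkdir -p /data/brick1/gv0         # use a subdirectory of the mount as the brick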
08:30 camg Skaag:  It was.  I used to run into Sage Weil at events and it was always under massive development.  But since RH bought both Ceph & Gluster they seem to be on a convergent development course.  Ceph is "more stable" and Gluster is gaining many of the features Ceph has.
08:33 fattaneh1 joined #gluster
08:36 fattaneh1 joined #gluster
08:45 camg How can I restart the self-heal daemon?  It is not running
08:48 camg Simply kill it & restart?
08:49 hackman camg: no, set the volume option
08:51 arcolife joined #gluster
08:51 hackman camg: gluster volume set <VOLNAME> cluster.self-heal-daemon on
08:52 camg hackman: Sure it says it's on but gluster v heal volname statistics shows it hasn't run in over a day
08:52 camg and gluster v heal volname info just hangs
08:53 hackman camg: so set it to off
08:53 hackman then wait for 10sec
08:53 hackman and set it to on again
08:53 Ulrar If there is no trace of a heal in the logs, does it guarantee a file wasn't healed?
08:53 hackman or try: gluster volume heal <VOLNAME> disable
08:53 hackman then enable
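The two toggles hackman lists, spelled out with a placeholder volume name; either pair bounces the self-heal daemon:

    gluster volume set gv0 cluster.self-heal-daemon off
    # wait ~10 seconds, then
    gluster volume set gv0 cluster.self-heal-daemon on

    # or the heal-level equivalent
    gluster volume heal gv0 disable
    gluster volume heal gv0 enable

    # check that it is doing work again
    gluster volume heal gv0 statistics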
08:53 Ulrar Still get very weird corruption, and I can't figure out where it's coming from
08:54 camg hackman: Good suggestion.  The old unplug it & plug it back in method
08:56 Ulrar Oh wow, I started a heal manually and the files now seem okay. How the hell does the file become corrupted while being in use without gluster noticing
08:56 camg self-heal is doing something.  The cli log shows a repetitive [socket.c:2355:socket_event_handler] 0-transport: disconnecting now
08:57 camg Ulrar: was it actual corruption or just metadata inconsistency?
08:58 Ulrar camg: It's VM disk files, and the VM seems to think it's pretty bad :)
08:58 Ulrar The VM goes on readonly, and when I reboot it I get a lot of I/O error
08:58 Ulrar I tried reinstalling on one of those corrupted files and even formatting the virtual disk doesn't work, had to delete and recreate it
08:58 Ulrar So it looks like the actual file is damaged
08:59 sakshi joined #gluster
08:59 camg Ulrar: Yes I've had that happen.  It doesn't take much with VM disks though.
09:00 scobanx camg: I can do it, but after returning the timeout that command still tries to run in the background and prevents my further gluster v commands too..
09:00 Ulrar camg: Yeah, but we have another cluster with the same versions actually in production that is running perfectly
09:01 Ulrar This one just keeps getting corrupted over and over, even though there is no traffic on it
09:01 camg Ulrar: wow let me know when you figure out the trick
09:01 haomaiwa_ joined #gluster
09:01 camg Ulrar: It's on a dev server?  Maybe bad hw?
09:01 Ulrar Had the 3 RAID cards changed, and tested the RAM and the CPUs
09:01 camg scobanx:  Restart glusterd?
09:02 Ulrar It's for production, just didn't migrate the application yet since it keeps exploding
09:02 scobanx Yes, tried it, but again gluster v status {clients|mem|..} commands never return... Maybe because the brick count is too high?
09:02 camg scobanx:  It's an interesting command but awkward.  It lists everything by the source port?
09:03 camg scobanx: 60, yes?  That is a lot but there must be similar setups out there
09:03 scobanx Yes for every client it prints connected brick. In my case 1560 bricks X 50 clients it is not working somehow..
09:04 scobanx camg: 1560 bricks
09:04 camg scobanx: then again I saw a gluster planning doc for the DHT2.0 that has centralized metadata servers (a la Ceph) to better handle your kind of scaling
09:04 scobanx Workloads are fine, I am pretty happy with the performance. But some gluster commands hang. I need to solve it.
09:05 camg scobanx: so it probably is a known issue
09:05 camg scobanx:  Yeah but the commands are crucial to know what's going on.  What happens when there is a problem?
09:05 scobanx Nothing..it gives some locking errors and timeout errors..
09:06 itisravi joined #gluster
09:06 camg scobanx: Ha, I meant hypothetically but then what are the implications of your actual errors?
09:07 MrAbaddon joined #gluster
09:07 camg Self-heal has ceased working on this volume.  The daemon is running but unresponsive.  Even after glusterd restart
09:07 scobanx Can anyone check the code and tell me if 'gluster v status clients|mem|..' commands works with 1500 brick and 1000 clients connected?
09:08 camg scobanx:  Find someone at Pandora?
09:08 scobanx :) there are some developers here but they sleep now I think..
09:09 camg scobanx:  I meant via email.  Facebook ran into this kind of problem and created their own framework (antfarm)
09:10 scobanx Will ask in mail list too..
09:12 camg Does anyone notice this incessantly from self-heal: [socket.c:2355:socket_event_handler] 0-transport: disconnecting now
09:14 scobanx camg: you can try gluster v start vol_name force to start all shd daemons..
09:16 fattaneh joined #gluster
09:21 itisravi_ joined #gluster
09:21 DV joined #gluster
09:36 jiffin1 joined #gluster
09:36 camg scobanx:  Thanks I don't use that enough!
09:36 camg scobanx:  It's running but unresponsive.  It just hangs.
09:39 camg I think I'm just maxing out an underpowered peer.
09:39 [Enrico] joined #gluster
09:45 ovaistariq joined #gluster
09:45 MrAbaddon joined #gluster
09:53 kotreshhr joined #gluster
09:53 ramky joined #gluster
10:11 jiffin1 joined #gluster
10:11 itisravi joined #gluster
10:16 ramky joined #gluster
10:16 Debloper joined #gluster
10:22 robb_nl joined #gluster
10:30 kotreshhr joined #gluster
10:48 chirino joined #gluster
11:01 Rasathus joined #gluster
11:03 Manikandan joined #gluster
11:05 nathwill joined #gluster
11:05 Manikandan joined #gluster
11:08 spalai joined #gluster
11:10 overclk joined #gluster
11:18 ira joined #gluster
11:23 johnmilton joined #gluster
11:24 DV joined #gluster
11:27 anil_ joined #gluster
11:33 ovaistariq joined #gluster
11:43 mhulsman joined #gluster
11:48 harish__ joined #gluster
11:50 shubhendu joined #gluster
11:51 jiffin joined #gluster
11:52 jdarcy joined #gluster
11:55 kkeithley joined #gluster
11:55 vmallika joined #gluster
11:59 EinstCrazy joined #gluster
12:05 JonathanD joined #gluster
12:06 bluenemo joined #gluster
12:06 robb_nl joined #gluster
12:10 chirino joined #gluster
12:11 Saravanakmr joined #gluster
12:21 skoduri joined #gluster
12:25 [Enrico] joined #gluster
12:26 B21956 joined #gluster
12:31 unclemarc joined #gluster
12:34 Bhaskarakiran joined #gluster
12:52 spalai left #gluster
12:54 RameshN joined #gluster
12:57 jlp1 joined #gluster
13:01 Rasathus_ joined #gluster
13:01 skoduri joined #gluster
13:05 kotreshhr joined #gluster
13:13 post-factum jdarcy++
13:13 glusterbot post-factum: jdarcy's karma is now 1
13:15 [Enrico] joined #gluster
13:20 madnexus_ joined #gluster
13:21 ovaistariq joined #gluster
13:23 madnexus_ hi guys! got a question here...
13:24 madnexus_ I have a 2 nodes replica cluster here using RDMA as transport
13:24 madnexus_ is adding a dummy node to the equation the only way to start one of the nodes if the other one is down?
13:25 jiffin1 joined #gluster
13:26 skoduri joined #gluster
13:29 shyam joined #gluster
13:31 RayTrace_ joined #gluster
13:32 Hesulan joined #gluster
13:35 Rasathus joined #gluster
13:35 Hesulan joined #gluster
13:47 jhyland joined #gluster
13:49 scoban joined #gluster
13:50 ovaistariq joined #gluster
13:52 TvL2386 joined #gluster
13:55 kotreshhr left #gluster
13:55 nbalacha joined #gluster
13:55 skylar joined #gluster
13:56 nehar joined #gluster
14:01 goretoxo joined #gluster
14:01 7GHAAM9WQ joined #gluster
14:02 plarsen joined #gluster
14:05 mhulsman joined #gluster
14:22 camg joined #gluster
14:30 nbalacha joined #gluster
14:35 theron joined #gluster
14:49 coredump joined #gluster
14:56 papamoose joined #gluster
15:01 haomaiwang joined #gluster
15:03 hamiller joined #gluster
15:04 camg joined #gluster
15:07 Gnomethrower joined #gluster
15:08 DV joined #gluster
15:08 shubhendu joined #gluster
15:10 kpease joined #gluster
15:11 fattaneh1 joined #gluster
15:11 fattaneh1 left #gluster
15:11 kpease joined #gluster
15:14 RameshN joined #gluster
15:16 atalur joined #gluster
15:20 arcolife joined #gluster
15:21 plarsen joined #gluster
15:21 spalai joined #gluster
15:21 spalai left #gluster
15:24 Rasathus_ joined #gluster
15:26 coredump joined #gluster
15:31 vmallika joined #gluster
15:37 jiffin joined #gluster
15:37 madnexus_ joined #gluster
15:39 Gaurav_ joined #gluster
15:41 nathwill joined #gluster
15:42 dlambrig_ joined #gluster
15:44 karnan joined #gluster
15:48 foster joined #gluster
15:55 nbalacha joined #gluster
15:57 theron joined #gluster
16:01 haomaiwa_ joined #gluster
16:02 RameshN joined #gluster
16:07 DV joined #gluster
16:12 kshlm joined #gluster
16:13 coredump joined #gluster
16:23 calavera joined #gluster
16:24 baoboa joined #gluster
16:32 nishanth joined #gluster
16:39 squizzi_ joined #gluster
16:48 rafi joined #gluster
16:52 jiffin joined #gluster
17:01 haomaiwa_ joined #gluster
17:03 skylar joined #gluster
17:05 RayTrace_ joined #gluster
17:08 DV joined #gluster
17:15 skylar joined #gluster
17:19 fattaneh1 joined #gluster
17:27 RayTrace_ joined #gluster
17:27 bowhunter joined #gluster
17:28 d0nn1e joined #gluster
17:44 skylar joined #gluster
17:45 fattaneh1 joined #gluster
17:57 skylar joined #gluster
17:59 MrAbaddon joined #gluster
18:01 haomaiwa_ joined #gluster
18:02 scoban joined #gluster
18:05 fattaneh joined #gluster
18:07 fattaneh left #gluster
18:08 mhulsman joined #gluster
18:19 jiffin joined #gluster
18:47 kanagaraj joined #gluster
18:48 deniszh joined #gluster
18:48 theron joined #gluster
18:54 deniszh joined #gluster
18:57 Rasathus joined #gluster
18:57 kkeithley joined #gluster
19:01 haomaiwa_ joined #gluster
19:04 ahino joined #gluster
19:04 coredump joined #gluster
19:07 coredump joined #gluster
19:08 ghenry joined #gluster
19:31 plarsen joined #gluster
19:36 calavera joined #gluster
19:41 sadbox joined #gluster
20:01 haomaiwa_ joined #gluster
20:04 sadbox joined #gluster
20:12 DV joined #gluster
20:14 sadbox joined #gluster
20:19 rideh joined #gluster
20:23 sadbox joined #gluster
20:30 Rasathus joined #gluster
20:33 mhulsman joined #gluster
20:40 techsenshi joined #gluster
20:47 scoban joined #gluster
21:01 haomaiwang joined #gluster
21:10 julim joined #gluster
21:11 theron joined #gluster
21:16 theron joined #gluster
21:56 klfwip joined #gluster
22:01 haomaiwa_ joined #gluster
22:06 abyss^ joined #gluster
22:07 DV joined #gluster
22:12 calavera joined #gluster
22:14 ahino joined #gluster
22:38 shyam joined #gluster
22:43 ronrib joined #gluster
22:59 calavera joined #gluster
23:01 DV joined #gluster
23:01 haomaiwang joined #gluster
23:06 nhayashi joined #gluster
23:35 Wizek joined #gluster
23:52 social joined #gluster
