IRC log for #gluster, 2015-10-16


All times shown according to UTC.

Time Nick Message
00:05 frankS2 joined #gluster
00:08 md2k joined #gluster
00:08 Iouns joined #gluster
00:08 Bardack joined #gluster
00:08 jbrooks joined #gluster
00:08 cabillman joined #gluster
00:08 pdrakeweb joined #gluster
00:08 JamesG joined #gluster
00:08 msciciel joined #gluster
00:08 cyberbootje joined #gluster
00:08 plarsen joined #gluster
00:08 csim joined #gluster
00:08 suliba joined #gluster
00:08 semajnz joined #gluster
00:08 mrEriksson joined #gluster
00:08 eryc joined #gluster
00:08 wonko joined #gluster
00:08 frakt joined #gluster
00:08 bitpushr joined #gluster
00:08 tg2 joined #gluster
00:08 coreping joined #gluster
00:08 ir8 joined #gluster
00:08 necrogami joined #gluster
00:08 Dave joined #gluster
00:08 afics joined #gluster
00:08 xMopxShell joined #gluster
00:08 samsaffron___ joined #gluster
00:08 Leildin joined #gluster
00:08 javi404 joined #gluster
00:08 swebb joined #gluster
00:08 Ru57y joined #gluster
00:08 papamoose joined #gluster
00:08 mjrosenb joined #gluster
00:08 XpineX joined #gluster
00:08 dlambrig joined #gluster
00:08 edong23 joined #gluster
00:08 tessier_ joined #gluster
00:08 devilspgd joined #gluster
00:08 scuttlemonkey joined #gluster
00:08 stopbyte joined #gluster
00:08 Mesh23 joined #gluster
00:08 m0zes joined #gluster
00:08 the-me joined #gluster
00:08 ndk joined #gluster
00:08 tty00 joined #gluster
00:08 monotek1 joined #gluster
00:08 jatb joined #gluster
00:08 Intensity joined #gluster
00:08 fubada joined #gluster
00:08 najib joined #gluster
00:08 beeradb_ joined #gluster
00:08 dgandhi joined #gluster
00:08 rp_ joined #gluster
00:08 rmgroth joined #gluster
00:08 kenansulayman joined #gluster
00:08 yosafbridge joined #gluster
00:08 Rapture_ joined #gluster
00:08 shortdudey123 joined #gluster
00:08 uebera|| joined #gluster
00:08 capri joined #gluster
00:08 Jeroenpc joined #gluster
00:08 Gugge joined #gluster
00:08 dblack joined #gluster
00:08 abyss^ joined #gluster
00:08 tru_tru joined #gluster
00:08 crashmag joined #gluster
00:08 k-ma joined #gluster
00:08 ccha joined #gluster
00:08 xavih joined #gluster
00:08 kalzz joined #gluster
00:08 adamaN joined #gluster
00:08 klaas joined #gluster
00:08 johndescs joined #gluster
00:08 muneerse joined #gluster
00:08 wolsen joined #gluster
00:08 Chr1st1an joined #gluster
00:08 Akee joined #gluster
00:08 mufa joined #gluster
00:08 R0ok_ joined #gluster
00:08 zoldar joined #gluster
00:08 sadbox joined #gluster
00:08 JPaul joined #gluster
00:08 _fortis joined #gluster
00:08 marcoceppi joined #gluster
00:08 Nebraskka joined #gluster
00:08 __NiC joined #gluster
00:08 glusterbot joined #gluster
00:08 ron-slc joined #gluster
00:08 SarsTW joined #gluster
00:09 stickyboy joined #gluster
00:09 delhage joined #gluster
00:09 clutchk joined #gluster
00:09 nhayashi joined #gluster
00:09 atrius` joined #gluster
00:09 RedW joined #gluster
00:09 ChrisNBlum joined #gluster
00:09 JoeJulian joined #gluster
00:09 yoavz joined #gluster
00:09 sjohnsen joined #gluster
00:10 frankS2 joined #gluster
00:13 gildub joined #gluster
00:13 David_Varghese joined #gluster
00:13 lalatenduM joined #gluster
00:13 semiosis joined #gluster
00:13 badone_ joined #gluster
00:13 malevolent joined #gluster
00:13 jamesc joined #gluster
00:13 rastar_afk joined #gluster
00:13 l0uis joined #gluster
00:13 Trivium joined #gluster
00:13 billputer joined #gluster
00:13 doekia joined #gluster
00:13 patryck joined #gluster
00:13 Sjors joined #gluster
00:13 tdasilva joined #gluster
00:13 partner joined #gluster
00:13 Arrfab joined #gluster
00:13 samikshan joined #gluster
00:13 and` joined #gluster
00:13 lpabon joined #gluster
00:13 Ludo- joined #gluster
00:13 amye-away joined #gluster
00:13 msvbhat_ joined #gluster
00:13 maveric_amitc_ joined #gluster
00:13 unicky joined #gluster
00:13 bennyturns joined #gluster
00:13 scubacuda joined #gluster
00:13 akik joined #gluster
00:13 timotheus1 joined #gluster
00:13 ndevos joined #gluster
00:13 ws2k3 joined #gluster
00:13 leucos joined #gluster
00:13 mrrrgn joined #gluster
00:13 obnox joined #gluster
00:13 ccoffey joined #gluster
00:13 p8952 joined #gluster
00:13 saltsa joined #gluster
00:13 DJClean joined #gluster
00:13 bivak_ joined #gluster
00:13 lezo joined #gluster
00:13 Ramereth joined #gluster
00:13 Kins joined #gluster
00:13 martinetd joined #gluster
00:13 paratai joined #gluster
00:14 deni joined #gluster
00:14 dastar joined #gluster
00:14 nzero joined #gluster
00:14 side_control joined #gluster
00:14 yangfeng joined #gluster
00:14 prg3 joined #gluster
00:14 nixpanic joined #gluster
00:14 sac joined #gluster
00:14 cristian joined #gluster
00:14 d-fence joined #gluster
00:14 drue joined #gluster
00:14 squeakyneb joined #gluster
00:14 mlhess joined #gluster
00:14 milkyline joined #gluster
00:14 mkzero joined #gluster
00:14 eljrax joined #gluster
00:14 PaulCuzner joined #gluster
00:14 primusinterpares joined #gluster
00:14 neuron joined #gluster
00:14 telmich joined #gluster
00:14 sloop joined #gluster
00:14 Champi joined #gluster
00:14 klaxa joined #gluster
00:14 nage joined #gluster
00:14 shruti joined #gluster
00:14 bfoster joined #gluster
00:14 zerick joined #gluster
00:14 lbarfield joined #gluster
00:14 NuxRo joined #gluster
00:14 siel joined #gluster
00:14 Larsen_ joined #gluster
00:14 johnmark joined #gluster
00:14 samppah joined #gluster
00:14 PaulePanter joined #gluster
00:14 Pintomatic joined #gluster
00:14 mmckeen joined #gluster
00:14 ackjewt joined #gluster
00:15 Gugge joined #gluster
00:16 PaulCuzner left #gluster
00:16 hagarth joined #gluster
00:16 rideh joined #gluster
00:16 EinstCrazy joined #gluster
00:16 harish_ joined #gluster
00:16 [7] joined #gluster
00:16 [o__o] joined #gluster
00:16 rehunted joined #gluster
00:16 mbukatov joined #gluster
00:16 sc0 joined #gluster
00:16 twisted` joined #gluster
00:16 night joined #gluster
00:16 kenansulayman joined #gluster
00:18 CP|AFK joined #gluster
00:18 Dasiel joined #gluster
00:18 masterzen joined #gluster
00:20 juhaj joined #gluster
00:20 cvstealth joined #gluster
00:20 Rydekull joined #gluster
00:20 DV__ joined #gluster
00:20 dlambrig joined #gluster
00:20 chirino joined #gluster
00:20 vincent_vdk joined #gluster
00:20 jotun_ joined #gluster
00:20 morse_ joined #gluster
00:20 squaly joined #gluster
00:20 portante joined #gluster
00:20 fsimonce joined #gluster
00:20 Peppard joined #gluster
00:20 sage joined #gluster
00:20 csaba joined #gluster
00:20 janegil joined #gluster
00:20 foster joined #gluster
00:20 jvandewege_ joined #gluster
00:20 edualbus joined #gluster
00:20 fyxim joined #gluster
00:20 VeggieMeat joined #gluster
00:20 jockek joined #gluster
00:20 anoopcs joined #gluster
00:20 campee joined #gluster
00:20 frostyfrog joined #gluster
00:20 rich0dify joined #gluster
00:20 JonathanD joined #gluster
00:20 purpleidea joined #gluster
00:20 lanning joined #gluster
00:20 sankarshan_away joined #gluster
00:20 jermudgeon joined #gluster
00:20 virusuy joined #gluster
00:20 yawkat joined #gluster
00:20 al joined #gluster
00:20 codex joined #gluster
00:25 Gugge joined #gluster
00:31 stickyboy joined #gluster
00:31 lkoranda joined #gluster
00:31 cliluw joined #gluster
00:31 armyriad joined #gluster
00:31 marlinc joined #gluster
00:49 nangthang joined #gluster
00:50 vimal joined #gluster
00:58 EinstCrazy joined #gluster
00:58 nangthang joined #gluster
01:03 shyam joined #gluster
01:13 dlambrig joined #gluster
01:15 dlambrig joined #gluster
01:17 haomaiwang joined #gluster
01:31 vimal joined #gluster
01:34 Lee1092 joined #gluster
02:00 billputer joined #gluster
02:04 Telsin joined #gluster
02:08 fyxim joined #gluster
02:12 B21956 joined #gluster
02:15 edwardm61 joined #gluster
02:20 harish_ joined #gluster
02:24 David-Varghese joined #gluster
02:42 shyam left #gluster
02:50 bharata-rao joined #gluster
02:50 PaulCuzner joined #gluster
02:58 zhangjn joined #gluster
03:02 maveric_amitc_ joined #gluster
03:05 kotreshhr joined #gluster
03:05 kotreshhr left #gluster
03:11 hagarth neuron: :O
03:12 DV joined #gluster
03:24 nishanth joined #gluster
03:28 TheSeven joined #gluster
03:34 haomaiwa_ joined #gluster
03:39 shubhendu_ joined #gluster
03:39 stickyboy joined #gluster
03:52 jotun joined #gluster
03:52 al joined #gluster
03:52 DV joined #gluster
03:53 yawkat joined #gluster
03:53 rich0dify joined #gluster
04:01 17SADTU2J joined #gluster
04:01 neha_ joined #gluster
04:02 itisravi joined #gluster
04:05 kshlm joined #gluster
04:06 ppai joined #gluster
04:09 hgowtham joined #gluster
04:11 RameshN joined #gluster
04:12 dmnchild joined #gluster
04:15 nbalacha joined #gluster
04:22 nbalacha joined #gluster
04:22 gem joined #gluster
04:22 haomaiwa_ joined #gluster
04:28 sakshi joined #gluster
04:41 najib joined #gluster
04:42 jiffin joined #gluster
04:43 kdhananjay joined #gluster
04:48 kotreshhr joined #gluster
04:55 vmallika joined #gluster
04:57 ramteid joined #gluster
05:02 RameshN joined #gluster
05:18 ndarshan joined #gluster
05:20 dmnchild joined #gluster
05:20 Bhaskarakiran joined #gluster
05:24 maveric_amitc_ joined #gluster
05:25 hagarth joined #gluster
05:26 ashiq joined #gluster
05:26 Manikandan joined #gluster
05:27 haomaiwa_ joined #gluster
05:28 atalur joined #gluster
05:29 dusmant joined #gluster
05:41 aravindavk joined #gluster
05:43 m0zes joined #gluster
05:46 karnan joined #gluster
05:48 haomaiwang joined #gluster
05:49 rafi joined #gluster
05:53 deepakcs joined #gluster
06:00 ramky joined #gluster
06:08 RayTrace_ joined #gluster
06:09 haomaiwang joined #gluster
06:10 RayTrace_ joined #gluster
06:17 zhangjn joined #gluster
06:18 jtux joined #gluster
06:23 mhulsman joined #gluster
06:26 hchiramm joined #gluster
06:28 Humble joined #gluster
06:31 skoduri joined #gluster
06:37 shubhendu_ joined #gluster
06:37 kshlm joined #gluster
06:38 EinstCrazy joined #gluster
06:38 haomaiwang joined #gluster
06:44 vmallika joined #gluster
06:48 spalai joined #gluster
06:48 Philambdo joined #gluster
06:52 haomaiwang joined #gluster
06:56 zhangjn joined #gluster
06:57 hgowtham joined #gluster
06:59 nangthang joined #gluster
07:04 EinstCra_ joined #gluster
07:05 EinstCrazy joined #gluster
07:06 EinstCr__ joined #gluster
07:06 haomaiwa_ joined #gluster
07:09 gem joined #gluster
07:10 LebedevRI joined #gluster
07:12 [Enrico] joined #gluster
07:22 ghenry joined #gluster
07:22 ghenry joined #gluster
07:25 kshlm joined #gluster
07:27 maveric_amitc_ joined #gluster
07:34 ivan_rossi joined #gluster
07:46 EinstCrazy joined #gluster
07:53 haomaiwa_ joined #gluster
07:54 social joined #gluster
07:56 haomaiw__ joined #gluster
07:58 Norky joined #gluster
08:00 jwd joined #gluster
08:11 deniszh joined #gluster
08:18 poornimag joined #gluster
08:26 ctria joined #gluster
08:31 Philambdo joined #gluster
08:43 monotek joined #gluster
08:43 Slashman joined #gluster
08:56 RameshN joined #gluster
08:56 kalzz joined #gluster
08:58 shubhendu_ joined #gluster
09:03 zhangjn joined #gluster
09:05 zhangjn joined #gluster
09:06 zhangjn joined #gluster
09:07 zhangjn joined #gluster
09:08 zhangjn joined #gluster
09:11 zhangjn joined #gluster
09:15 Bhaskarakiran joined #gluster
09:17 Saravana_ joined #gluster
09:18 hagarth joined #gluster
09:19 cppking joined #gluster
09:19 arcolife joined #gluster
09:21 cppking hello guys, I'm using gluster 3.6.6 and have already created a volume, but I can't mount it from another machine with the same gluster client version. How do I debug this?
09:22 cppking no firewall and selinux
09:22 jiffin1 joined #gluster
09:24 hagarth cppking: have you looked into the client log file?
09:26 mhulsman1 joined #gluster
09:28 cppking hagarth: fuc**, I found that my local mount point already has an NFS volume mounted, so I can't mount glusterfs
09:28 cppking it took me about 1 hour to figure that out, shi**
09:28 cppking somebody kil* me
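A minimal sketch of the checks discussed above, for anyone who hits the same symptom. The mount point, server and volume names below are placeholders, not cppking's actual setup:

    # see whether something (e.g. an NFS export) is already mounted on the target directory
    findmnt /mnt/gluster
    # the fuse client logs to a file named after the mount path (slashes become dashes)
    less /var/log/glusterfs/mnt-gluster.log
    # once the mount point is free, mount the volume over fuse
    mount -t glusterfs server1:/myvol /mnt/gluster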
09:38 stickyboy joined #gluster
09:39 Philambdo joined #gluster
09:50 Philambdo joined #gluster
09:52 bluenemo joined #gluster
09:55 mhulsman joined #gluster
09:59 sripathi joined #gluster
10:02 nbalacha joined #gluster
10:20 jiffin joined #gluster
10:25 harish joined #gluster
10:26 mhulsman joined #gluster
10:36 Philambdo1 joined #gluster
10:41 Philambdo1 joined #gluster
10:45 Philambdo1 joined #gluster
10:46 anonymus joined #gluster
10:46 anonymus hi
10:46 glusterbot anonymus: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:47 anonymus sorry bot, I would like to say 'hi' only
10:49 hagarth anonymus: hi there :)
10:54 jiffin1 joined #gluster
10:54 hgowtham joined #gluster
11:23 deepakcs dmnchild: ping, were you the one asking yesterday about the ssl.cipher-list option failing due to the absence of ' '?
11:25 deepakcs dmnchild: guess you were (just did a scrollback to confirm). The docs update patch I just sent: https://github.com/gluster/glusterdocs/pull/56
11:25 glusterbot Title: Update SSL.md by dpkshetty · Pull Request #56 · gluster/glusterdocs · GitHub (at github.com)
11:25 deepakcs dmnchild: with that it should be visible in the docs too :)
11:26 kotreshhr left #gluster
11:27 dmnchild I actually tried with the single quotes too on debian jessie, and only the /! worked. Neither " nor ' helped.
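For reference, ssl.cipher-list is set like any other volume option; a sketch, assuming a volume named myvol and an illustrative cipher string (as the exchange above shows, the exact quoting or escaping that survives your shell can vary):

    # single quotes stop bash history expansion from eating the '!'
    gluster volume set myvol ssl.cipher-list 'HIGH:!SSLv2'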
11:32 julim joined #gluster
11:33 maveric_amitc_ joined #gluster
11:39 ira joined #gluster
11:55 EinstCrazy joined #gluster
11:59 neha__ joined #gluster
12:14 plarsen joined #gluster
12:18 poornimag joined #gluster
12:26 mhulsman1 joined #gluster
12:30 ppai joined #gluster
12:43 unclemarc joined #gluster
12:49 B21956 joined #gluster
12:51 hagarth joined #gluster
13:10 haomaiwa_ joined #gluster
13:12 hamiller joined #gluster
13:12 ZuLu[UM0215] joined #gluster
13:17 ayma joined #gluster
13:19 ZuLu[UM0215] Hi there. New to gluster, btw. is there a way to have a web console for a version installed from repositories in Centos 6?
13:21 lpabon joined #gluster
13:22 a_ta joined #gluster
13:22 Akee joined #gluster
13:24 bennyturns joined #gluster
13:27 mhulsman joined #gluster
13:28 kotreshhr joined #gluster
13:29 maserati joined #gluster
13:33 dgandhi joined #gluster
13:34 kotreshhr joined #gluster
13:34 shaunm joined #gluster
13:40 mpietersen joined #gluster
13:49 rwheeler joined #gluster
13:53 ZuLu[UM0215] @glusterbot is there anybody out there?
13:53 ZuLu[UM0215] :)
13:54 maveric_amitc_ joined #gluster
13:57 a_ta joined #gluster
13:59 rjoseph joined #gluster
14:01 spalai left #gluster
14:07 ppai joined #gluster
14:12 the-me joined #gluster
14:14 skoduri joined #gluster
14:17 Leildin what do you mean by webconsole ?
14:19 Leildin I've never used the management console, didn't know there is one. you can do everything you need in command line form though, might give you a better grasp of what's happening when you do stuff
14:25 skylar joined #gluster
14:25 mlanz joined #gluster
14:26 mlanz joined #gluster
14:26 mlhamburg joined #gluster
14:26 haomaiwa_ joined #gluster
14:27 mhulsman joined #gluster
14:28 jiffin ping kkeithley, I need to clarify certain doubts regarding FOSDEM
14:29 ZuLu[UM0215] i know, currently I'm working in command line and it's ok.
14:29 ZuLu[UM0215] https://gluster.org/community/documentation/index.php/Gluster_3.1:_Exploring_the_Gluster_Management_Console
14:29 a_ta left #gluster
14:29 ZuLu[UM0215] but I see that there is/was some sort of web access and it will be ok for a junior admin
14:31 pseudonymous joined #gluster
14:32 pseudonymous So I have a volume whose constituent components all changed their IPs. What's the easiest way I can get back up and running? (I'm willing to nuke metadata if I can simply create a new volume with the old data and continue from there.)
14:34 mlhamburg Hi, I'm testing a gluster setup with two nodes and i hard-resetted one node to test reconnect and self healing. After the node came back up again, the other node tries to reconnect but logs '0-socket: invalid argument: this->private [Invalid argument]' Any hint where to look? http://pastebin.com/u0HCVeAr
14:34 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
14:35 ndevos ZuLu[UM0215]: the oVirt project offers a web console for Gluster, you could have a look at that
14:35 mlhamburg @paste
14:35 glusterbot mlhamburg: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
14:40 mlhamburg Hi, I'm testing a gluster setup with two nodes and i hard-resetted one node to test reconnect and self healing. After the node came back up again, the other node tries to reconnect but logs '0-socket: invalid argument: this->private [Invalid argument]' Any hint where to look? http://fpaste.org/279981/44500637/
14:40 glusterbot Title: #279981 Fedora Project Pastebin (at fpaste.org)
14:44 Trivium joined #gluster
14:51 haomaiwang joined #gluster
14:52 ZuLu[UM0215] ndevos: thanks for the tip, I'll look into it
14:57 haomaiwa_ joined #gluster
14:59 jobewan joined #gluster
14:59 calavera joined #gluster
15:03 JoeJulian mlhamburg: All that really tells us is that this->private was null and failed the assert, "GF_VALIDATE_OR_GOTO ("socket", this->private, err);"
15:03 shyam joined #gluster
15:03 JoeJulian Does it fail to reconnect after that, or is it a transitory error?
15:03 haomaiwa_ joined #gluster
15:04 mlhamburg JoeJulian: healing is not started. if i kill the glusterfs process that has "--volfile-id gluster/glustershd" and restart it, healing starts and everything works as expected
15:05 wushudoin joined #gluster
15:05 mlhamburg JoeJulian: I also tcpdumped all TCP SYN packets, no outgoing packets
15:06 JoeJulian mlhamburg: Please file a bug report.
15:06 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
15:06 JoeJulian In the mean time, let's see if I can figure out what changed and if there's a workaround.
15:07 mlhamburg JoeJulian: thx. it also happened in 3.6
15:07 JoeJulian orly
15:08 JoeJulian 3.6.?
15:08 squizzi_ joined #gluster
15:09 mlhamburg JoeJulian: I used glusterfs-3.6 ubuntu repository, so I guess it was 3.6.6
15:11 mlhamburg JoeJulian: Distributed-Replicate volume, 2x2 bricks, two servers
15:13 laxdog joined #gluster
15:13 poornimag joined #gluster
15:13 squizzi_ joined #gluster
15:17 jobewan joined #gluster
15:17 laxdog Hey, I did an upgrade from 3.5 to 3.7, but now I can't use one of my volumes. I'm getting "Quota operation doesn't have a task_id" and "Failed to add task details to dict" when I try to run status on them.
15:17 laxdog It says it's started, but I can't run status and there's no replication.
15:18 laxdog Could anyone help?
15:22 hagarth laxdog: are you able to mount the volume?
15:22 JoeJulian mlhamburg: There's not been any changes to any of the code in that, or any of the calling functions in several versions. I'm at a loss.
15:22 laxdog I can mount the volume, yea.
15:23 JoeJulian hagarth: Did you see mlhamburg's issue? http://fpaste.org/279981/44500637/ prevents shd from reconnecting after a ping-timeout.
15:23 glusterbot Title: #279981 Fedora Project Pastebin (at fpaste.org)
15:23 laxdog I *think* that one of the hosts may have been on 3.6 for part of the procedure. I'm not sure, I'm coming in half way through this problem.
15:25 JoeJulian laxdog: http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.7#Upgrade_Steps_For_Quota perhaps?
15:26 kdhananjay joined #gluster
15:27 hagarth JoeJulian: checking
15:30 laxdog JoeJulian: That's interesting. None of those steps were done it seems.
15:30 laxdog Which certainly could explain that error.
15:31 JoeJulian That was my thought.
15:31 JoeJulian I'm not sure if it can be done after-the-fact.
15:32 hagarth JoeJulian, mlhamburg: sounds very odd. looks like the socket's private structure has become invalid.
15:33 JoeJulian I can't duplicate it here.
15:33 laxdog I checked the dirs it's meant to write to and they're empty. I'm not sure if I can force it myself or not.
15:33 laxdog I'm starting to think it would be best to pull all the data off and rebuild them from scratch.
15:34 JoeJulian I'm pretty sure you could just remove quota, wait for it to clear the attributes, then re-add it. Not sure if that's better for you.
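If the remove-and-re-add route is taken, it would look roughly like this; volume and directory names are placeholders, and the 3.7 upgrade notes linked above should still be read first:

    gluster volume quota myvol disable
    # wait for the quota xattrs to be cleared from the bricks, then re-enable and re-add limits
    gluster volume quota myvol enable
    gluster volume quota myvol limit-usage /projects 10GB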
15:36 cholcombe joined #gluster
15:40 stickyboy joined #gluster
15:46 rafi joined #gluster
15:46 mlhamburg JoeJulian: I hard-resetted one server, after that the other shows the error every 5secs in glustershd.log, gv0.log and nfs.log. I then added some files to the volume and brought back the server after 5mins, error still occurs.
15:46 armyriad joined #gluster
15:48 mlhamburg JoeJulian: Could it be a configuration error? Everything else works as expected and after a restart it also heals automatically. Another question: How can I stop/restart all gluster processes? On Ubuntu "service glusterfs-server stop" does not stop all processes. Am I expected to kill them?
15:48 hagarth mlhamburg: can you please post complete log of glustershd or nfs on fpaste?
15:48 laxdog JoeJulian: That would be much better if it works, I'll give it a go.
15:48 armyriad joined #gluster
15:51 laxdog Apparently there are no quotas, so maybe that error isn't the real issue.
15:51 laxdog I think I just need to cut my losses and migrate.
15:51 CyrilPeponnet Hey guys, is it 100% sure that if gluster vol heal bla info returns 0 entries for each brick, all of my bricks are in sync for a replicated volume?
15:53 hagarth CyrilPeponnet: that is an indication of the index used by glustershd not having any entries
15:53 JoeJulian The only thing 100% is to stop the volume and do a hash comparison.
15:54 JoeJulian Even then you're only like 9 nines sure because cosmic radiation.
15:54 CyrilPeponnet so let's say I had one node to start with, with a volume full of data. I added 2 other nodes and added bricks to make this vol replica 3
15:54 CyrilPeponnet when can I unplug the first node ?
15:55 CyrilPeponnet (with 10TB of data comparing hash will take like forever)
15:55 hagarth CyrilPeponnet: I would run gluster volume heal full, let that complete successfully and then unplug the first one
15:56 JoeJulian I agree. It's even worse with 5 PB.
15:56 CyrilPeponnet ok to be sure I will redo the heal full again :p
15:56 CyrilPeponnet then when heal info give me 0
15:56 JoeJulian If you did a heal full, and it's completed, I'd feel pretty confident.
15:56 CyrilPeponnet I suppose I'm good to go so I can remove the brick then run and hide
15:57 CyrilPeponnet (just in case)
15:57 JoeJulian Remove the brick. It's not like it gets erased or anything.
15:57 JoeJulian Archive it.
15:57 CyrilPeponnet One awesome feature would be to have the synced state in gluster vol info ;)
15:57 CyrilPeponnet yes sure
15:58 CyrilPeponnet thx guys, as usual you are the bests :)
15:58 hagarth CyrilPeponnet: also check glustershd.log on all nodes before unplugging
15:58 JoeJulian Check for Errors?
15:58 CyrilPeponnet @hagarth for what ?
15:59 hagarth CyrilPeponnet: check for errors .. yes
15:59 CyrilPeponnet ok thx guys
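Put together, the sequence suggested here might look like the following for a volume named myvol (placeholder); run the log check on every node:

    # trigger a full self-heal, then poll until info reports 0 entries per brick
    gluster volume heal myvol full
    gluster volume heal myvol info
    # scan the self-heal daemon log for error-level lines (' E ' severity) before unplugging
    grep ' E ' /var/log/glusterfs/glustershd.log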
15:59 jiffin joined #gluster
16:00 JoeJulian Your monitoring system should always be checking for > warnings anyway, imho.
16:01 haomaiwa_ joined #gluster
16:02 Philambdo joined #gluster
16:02 hagarth JoeJulian: I have been considering a more native alerting system to be built into gluster .. something that emits events of interest which could be consumed by listeners
16:02 shyam joined #gluster
16:03 CyrilPeponnet I have a bunch of warnings every day
16:03 JoeJulian hagarth: dbus?
16:04 hagarth JoeJulian: yes, that's what we have been looking into
16:04 mlhamburg JoeJulian: I will prepare a fresh install and testcase on monday and then file a bug report. Current log files are very big because of trace level and hours of error condition. Can you answer my question about restarting the processes? That would help with a clean report.
16:04 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
16:04 JoeJulian I know there's a lot of people that aren't fans of dbus, but I'm kind of digging it.
16:07 hagarth yeah, basically we would need operators to be alerted as soon as something inappropriate is sensed. will try figuring out what other interfaces might be of interest.
16:07 JoeJulian mlhamburg: Sorry, missed the question. I'm working, too. :D - Could be configuration, I suppose. To stop all gluster processes, pkill -f gluster. The gluster-server stop only stops glusterd (and should stop shd and nfs) by design.
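In shell form, the stop/start cycle described here might look like this on Ubuntu; note that pkill -f is blunt and will also take down any fuse client mounts (and anything else with "gluster" on its command line) on that box:

    service glusterfs-server stop   # stops glusterd; some gluster processes keep running by design
    pkill -f gluster                # kills glusterd, glusterfsd bricks and glusterfs helpers
    service glusterfs-server start  # glusterd comes back and respawns bricks, shd and nfs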
16:09 JoeJulian hagarth: self-heal started and completed
16:09 JoeJulian Well, self-heal state.
16:10 JoeJulian I would like to be able to have a dbus state that I can monitor that would indicate if a self-heal is in process.
16:12 mlhamburg JoeJulian: thanks! In Europe it's friday evening now so for me it's time to go home now :) Thank you for your help, perhaps we'll meet here on monday.
16:13 hagarth JoeJulian: right, that would be one
16:14 a_ta joined #gluster
16:14 nbalacha joined #gluster
16:14 David_Varghese joined #gluster
16:14 calavera joined #gluster
16:15 vmallika joined #gluster
16:20 shubhendu_ joined #gluster
16:28 CyrilPeponnet @hagarth geo-replication is not working well for us.. it only transfers some files and says it's done
16:29 CyrilPeponnet stuck at 99 files synced
16:29 kotreshhr left #gluster
16:29 CyrilPeponnet I tried to touch every file on my master volume but they are not transferred
16:30 hagarth CyrilPeponnet: 3.6 right?
16:30 CyrilPeponnet 3.6.5
16:30 CyrilPeponnet yep
16:30 ivan_rossi left #gluster
16:31 hagarth CyrilPeponnet: 3.7.x has quite a few improvements for geo-replication. I will check with the geo-replication developers if we can backport some of those to 3.6.
16:31 CyrilPeponnet so for now I'd better rsync my data ?
16:31 hagarth CyrilPeponnet: in any case, would it be possible to file a bug? it would be good to root cause if it is a known fixed issue or a new one.
16:31 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
16:31 hagarth CyrilPeponnet: any possibility of upgrading to 3.7?
16:32 jiffin joined #gluster
16:32 CyrilPeponnet not really, we just upgraded everything from 3.6 to 3.6.5
16:32 CyrilPeponnet we have more than 1k clients using fuse 3.6
16:32 hagarth CyrilPeponnet: anything in the geo-replication logs that indicate failures?
16:32 CyrilPeponnet I'm looking
16:32 hagarth CyrilPeponnet: I would love to talk to you about your deployment some day!
16:32 armyriad joined #gluster
16:33 CyrilPeponnet Sure !
16:33 CyrilPeponnet source(/export/raid/videos):221:errlog] Popen: command "ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-KILwkz/056d87bdeecf02f82b096c9942ef65a4.sock root@andcgluster01.be.alcatel-lucent.com /nonexistent/gsyncd --session-owner 8731ff68-474a-4076-bb2c-d93608463057 -N --listen --timeout 120 gluster://localhost:videos" returned with 255,
16:33 CyrilPeponnet saying
16:33 hagarth returned with 255 .. does look like -1
16:34 CyrilPeponnet https://gist.github.com/CyrilPeponnet/df40e882665ff4f9decd
16:34 glusterbot Title: gist:df40e882665ff4f9decd · GitHub (at gist.github.com)
16:34 hagarth CyrilPeponnet: anything on the slaves?
16:35 CyrilPeponnet looking
16:36 hagarth CyrilPeponnet: Hi there, would cyril.peponnet@alcatel-lucent.com be your email address? I will start a thread including our geo-replication developers so that we can address this problem.
16:38 dblack joined #gluster
16:43 Rapture joined #gluster
16:45 deniszh1 joined #gluster
16:46 Lee1092 joined #gluster
16:48 JoeJulian hagarth++
16:48 glusterbot JoeJulian: hagarth's karma is now 5
16:48 jiffin1 joined #gluster
16:49 CyrilPeponnet sure
16:49 JoeJulian I like that approach. The "ask on gluster-users" instruction has always bothered me for some reason.
16:50 CyrilPeponnet @hagarth nothing on slave logs
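For digging further into a stalled session, a sketch of the usual places to look; the master volume name "videos" comes from the log snippet above, while the slave host and volume are placeholders:

    # session state and per-worker counters, run on the master side
    gluster volume geo-replication videos root@slavehost::videos status detail
    # master-side worker logs
    ls /var/log/glusterfs/geo-replication/videos/
    # slave-side logs live in a parallel directory on the slave nodes
    ls /var/log/glusterfs/geo-replication-slaves/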
16:54 Chr1st1an Hello
16:54 glusterbot Chr1st1an: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:55 Chr1st1an Anyone had issues where a corrupt config across multiple nodes has happened after adding "option rpc-auth-allow-insecure on" in /etc/glusterfs/glusterd.vol, even when glusterd is stopped?
16:56 Chr1st1an And then removing the config inside /var/lib/glusterd/* and peer probing against a working node
16:56 Chr1st1an Only to notice that you get 2 UUIDs per host and it's registered as a new node
16:57 JoeJulian I haven't seen that specifically, but to cure the "peer rejected" due to a hash mismatch, you can just "gluster sync $other_server"
16:58 jiffin joined #gluster
16:58 JoeJulian To fix two uuids for the same server, you should just be able to stop glusterd and delete the errant uuid in /var/lib/glusterd/peers.
16:59 Chr1st1an Any way to figure out what UUID is errant ?
17:00 JoeJulian /var/lib/glusterd/glusterd.info
17:01 Chr1st1an Does the volume / brick files also use the UUID ?
17:02 Chr1st1an Just wondering if I have to edit a load of config , since what I did was to remove all files inside /var/lib/glusterd/* after backing it up
17:02 JoeJulian No
17:02 Chr1st1an and then I did a peer probe <workingnode>
17:03 JoeJulian That worked?
17:03 a_ta left #gluster
17:03 JoeJulian I would have expected it to reject the probe since it had the wrong uuid.
17:03 nathwill joined #gluster
17:03 haomaiwa_ joined #gluster
17:04 JoeJulian I mean, probing it *from* the working node, sure, but not probing a trusted pool from a non-trusted host.
17:04 Chr1st1an Somehow it worked
17:05 Chr1st1an I see now that the UUID inside the backup file is different from the one currently running
17:06 Chr1st1an gluster pool list | grep n14
17:06 Chr1st1an 9e3a0503-*   14.node.net  Connected
17:06 Chr1st1an 6e144bdc-*   14.node.net  Connected
17:11 Chr1st1an Question is, should I run a peer detach on the UUID that's not in /var/lib/glusterd/glusterd.info
17:11 Chr1st1an Just to clean it up
17:13 Chr1st1an Also a bit weird that it can be online on 2 UUID's at the same time
17:18 hagarth CyrilPeponnet: ok
17:19 JoeJulian Chr1st1an: I wouldn't bother.
17:21 Chr1st1an But having multiple peer inside the peer status list that is the same name but different UUID cause issues ?
17:21 Chr1st1an that came out wrong
17:21 JoeJulian stop glusterd. remove the uuid that no longer exists from /var/lib/glusterd/peers. Start glusterd.
17:22 Chr1st1an Thanks will try and do that on all :)
17:23 JoeJulian Do it on one and make sure it doesn't just come back.
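A sketch of that cleanup on one of the nodes showing the duplicate; <stale-uuid> is a placeholder for whichever entry does not match the affected host's own glusterd.info:

    service glusterd stop                    # or glusterfs-server, depending on packaging
    cat /var/lib/glusterd/glusterd.info      # on the affected host: the UUID to keep
    ls /var/lib/glusterd/peers/              # one file per peer, named by its UUID
    rm /var/lib/glusterd/peers/<stale-uuid>  # drop the errant entry
    service glusterd start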
17:28 dlambrig joined #gluster
17:32 shubhendu_ joined #gluster
17:35 a_ta joined #gluster
17:39 rafi joined #gluster
17:40 Chr1st1an It goes away on that one node where it's done
17:40 Chr1st1an but still shows up on the others
17:42 dlambrig joined #gluster
17:42 sysconfig joined #gluster
17:48 beeradb_ joined #gluster
17:59 jiffin joined #gluster
18:01 haomaiwa_ joined #gluster
18:03 a_ta joined #gluster
18:11 theron joined #gluster
18:18 jiffin joined #gluster
18:32 F2Knight joined #gluster
18:45 papamoose joined #gluster
18:54 maveric_amitc_ joined #gluster
18:54 Philambdo joined #gluster
19:01 haomaiwa_ joined #gluster
19:01 jiffin joined #gluster
19:03 cvstealth joined #gluster
19:10 deniszh joined #gluster
19:12 jbrooks joined #gluster
19:35 JoeJulian @op-version
19:35 glusterbot JoeJulian: The operating version represents the RPC and translator capabilities required to accommodate the volume settings ( http://gluster.org/community/documentation/index.php/OperatingVersions ). To allow older version clients to connect to newer servers, reset any volume options that require the newer op-version.
19:36 JoeJulian hmm
19:36 dlambrig joined #gluster
20:01 haomaiwa_ joined #gluster
20:08 stickyboy joined #gluster
20:11 DV joined #gluster
20:11 Rapture joined #gluster
20:15 stickyboy joined #gluster
20:17 dlambrig joined #gluster
20:33 stickyboy joined #gluster
20:33 cliluw joined #gluster
21:01 haomaiwa_ joined #gluster
21:01 a_ta joined #gluster
21:04 ramky joined #gluster
21:06 64MAD1F06 left #gluster
21:08 linagee joined #gluster
21:08 linagee is there a way to know that a glusterfs is in sync?
21:09 linagee (besides looking at network traffic, there has to be a better way than that....)
21:09 JoeJulian gluster volume heal $vol info
21:09 JoeJulian If it's empty, then any heals that might have been scheduled have been completed.
21:10 linagee Number of entries: 0
21:10 linagee I guess I am in sync at the very moment the command was run?
21:11 JoeJulian You're always in sync unless you're not.
21:12 joshin joined #gluster
21:12 joshin joined #gluster
21:12 linagee JoeJulian: does it seem like a really not-smart thing to do to have LXC .raw disk images being synced with gluster? (seems kind of scary shutting down and starting up the LXC container on another node, how do I know it was in sync?)
21:12 joshin left #gluster
21:12 linagee it seems to work though.....
21:12 JoeJulian It's always in sync.
21:12 JoeJulian The client writes to any bricks that are needed - synchronously.
21:13 JoeJulian The only reason it wouldn't be is if you had a server outage.
21:13 linagee I wish I knew why, lol. (and yes, I'm using the gluster mounted device, not accessing the ext4 volume directly, I found volume directly is a big no-no or something?)
21:13 JoeJulian @glossary
21:13 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
21:13 linagee er yes. I am using the bricks, not the native volume
21:14 JoeJulian And yes, using the brick avoids the software - thus you're no longer using glusterfs.
21:14 JoeJulian It's like writing to the middle of the disk using dd and expecting ext4 to know what to do with that.
21:14 linagee I am using the mounted glusterfs device. is that not called the bricks? (maybe my terminology is still wrong)
21:14 linagee glusterfs device
21:15 linagee you can have one host doing glusterfsd hosting and have it mount its own device somewhere else on that same server so you can "properly consume it" (so things sync)
21:15 linagee (probably not best for performance though?)
21:16 JoeJulian When you mount the volume, you're using the volume.
21:16 linagee I have /etc/storage/glusterfs as my native ext4 device (natively a /dev/md0), then I have that mounted on /glusterfs so I can use it
21:16 linagee on the same server. does that seem weird?
21:17 JoeJulian Nope, pretty common.
21:17 linagee I can touch /glusterfs/testfile on one server, and it appears on /glusterfs/testfile on the second server
21:17 linagee (so I know glusterfs is working properly, yay! :) )
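That self-mount is just an ordinary fuse mount pointed at localhost; a sketch assuming a volume named myvol (placeholder) and the /glusterfs mount point from above:

    # the brick (ext4 on /dev/md0) holds the data but is never written to directly;
    # all access goes through the mounted volume
    mount -t glusterfs localhost:/myvol /glusterfs
    # or the equivalent /etc/fstab line:
    # localhost:/myvol  /glusterfs  glusterfs  defaults,_netdev  0  0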
21:18 linagee but I just wanted to really know if its really bad to be throwing LXC images onto such a thing. :-D
21:18 linagee and the consistency if things aren't synced.
21:18 linagee (or, if glusterfs magically takes care of that for me?)
21:19 JoeJulian Consider it magic (but learn how it works).
21:19 JoeJulian For now you can have confidence trusting that it knows more than you do about how to do this. ;)
21:20 linagee well I turn off the LXC on one node before I turn it on on the other node. just wondering, if I did a lot of changes to the disk and didn't wait long enough for the nodes to sync... :-D
21:20 linagee or I guess I should always wait for the sync before turning off a node too...
21:20 JoeJulian As long as you're going through a mounted volume *it's always in sync*.
21:21 linagee interesting
21:21 linagee does that mean it blocks i/o until its in sync?
21:21 JoeJulian (unless you have a server outage, after which it heals itself but even so there's still no period of time during which it's unsafe).
21:22 linagee ack. I wish the gluster documentation would have cleared all of these questions up for me. ;) (so I wouldn't have to waste all of your time with what you probably think are newb questions, lol.)
21:22 JoeJulian The fuse client connects to ALL SERVERS simultaneously.
21:23 a_ta joined #gluster
21:23 linagee ah
21:23 JoeJulian I don't mind newb questions.
21:23 JoeJulian I was once one.
21:24 JoeJulian Sometimes it takes some deep meditation to let it all soak in. :)
21:24 linagee I have some dev familiarity with file handles and such. but I don't have a lot of background as to the internals of fuse. (other than at the sysadmin level.)
21:25 skylar joined #gluster
21:25 al joined #gluster
21:25 RedW joined #gluster
21:27 JoeJulian Gluster is a series of micro-kernels called translators. You can picture them as blocks with plugs up and down. Some translators just have one connector that goes each way. The translator does something with the fd and passes it on.
21:27 linagee (and that dev knowledge is still at the application developer level, not the system/kernel developer level. ;) )
21:27 JoeJulian Some translators (the replication translator for instance) have two connectors on one side, and pass the fd from the top to both bottom.
21:28 linagee so it will block and wait until all of the glusterfsd have a copy of the thing to write?
21:28 JoeJulian The distribute translator has many connectors on the bottom and, depending on the hash match of the filename, pass the fd on to just one of them.
21:28 JoeJulian yes.
21:28 linagee ah. :-D that one yes clears up quite a bit. :-D
21:29 linagee blocking makes sense why it would "always" be consistent.
21:29 JoeJulian Gluster leans more CP out of CAP.
21:30 linagee (except when its not consistent, lol, maybe if you power something down before its had time to clear its write cache or something.)
21:30 JoeJulian No, it's pretty resilient even then.
21:30 linagee because the filesystem stuff takes care of that?
21:30 linagee (journaling and such?)
21:30 linagee (at least... I think that is what journaling is for, lol.)
21:31 linagee er, ignore the previous two bracketed () things
21:31 JoeJulian Beyond even that. If you're replicated and at least one server is healthy, the client knows to use that data. In the mean time, the self-heal daemon will block check the other server and fix its copy.
21:32 JoeJulian Once you wrap your head around that part, then you can learn about split-brain and why you should have quorum.
21:32 linagee right now I have two nodes using ext4 for storage. if I throw a third system into the mix, does it behave like a three-way mirror, or does glusterfs have the smarts to maybe keep 2 copies of the data? (or is that configurable?)
21:33 JoeJulian @lucky expanding glusterfs by one server
21:33 glusterbot JoeJulian: https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
21:33 linagee thanks!
21:33 JoeJulian Or you can make it replica 3 if you choose.
21:33 linagee ah... that was the part of the initial volume creation. replica.
21:34 linagee so its on a per (brick? volume? not sure of the terminology) level?
21:35 JoeJulian replica 2 means that you'll have 2 bricks that are replica pairs (in the order added). If you had 4 bricks configured replica 2, you would have 2 pairs and your files would be distributed among them.
21:35 JoeJulian @brick order
21:35 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
21:35 linagee *brain explodes* :-D
21:36 JoeJulian hehe
21:36 linagee I mean I get it. I just didn't know there was that complexity there.
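For the third-node case asked about above, the replica count of an existing volume is raised at add-brick time; a sketch with placeholder names:

    # turn a 2-brick replica 2 volume into a 3-way mirror
    gluster volume add-brick myvol replica 3 server3:/data/brick1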
21:36 JoeJulian It's amazing how many ways there are to screw things up when you start clustering things together to make bigger things.
21:37 JoeJulian Luckily there are some brilliant programmers that make it all happen for us.
21:37 linagee like trying to run containers/VMs on your storage nodes? :-D
21:37 JoeJulian I've been running VMs on gluster for years.
21:37 linagee (I mean, there are great peak bandwidth reasons to do this, but also will cause spinny disks to get lots of iowait. ;) )
21:38 diegows joined #gluster
21:38 JoeJulian How about running RDBs on gluster and using the DHT translator as a way to shard your data for better performance?
21:38 linagee I've just started playing around with Proxmox (coming from openvz).
21:38 skylar joined #gluster
21:38 linagee (Proxmox is like a fancy frontend gui for LXC and QEMU)
21:39 JoeJulian Yeah, I'm familiar with it.
21:39 JoeJulian I'm not a huge fan.
21:39 linagee I was initially!
21:39 JoeJulian Mostly because of their choice of distro.
21:39 linagee then I found out the GlusterFS support stinks.
21:39 linagee no running VMs on Gluster volumes. (Unless you do like I did and call it a volume and that its shared.)
21:40 JoeJulian Partly because there's a lot of people that don't know anything about what they're using because it's all obscured by a pretty gui.
21:40 linagee and it does properly work by having a VM on a shared volume. just kind of silly why you have to not use native glusterfs support to do that....
21:40 shyam joined #gluster
21:41 linagee the pretty gui seemed like a nice idea for: now I can have less knowledgable people fail things over if I'm not available. Until I found out that containers can't be live migrated. (wtf?)
21:41 JoeJulian Yep.
21:41 linagee and Ubuntu has some sort of special LXD live migration thing for LXC. so proxmox feels a bit old for not also having that. :(
21:41 JoeJulian You're supposed to just tear them down and start up new ones.
21:42 linagee I can shut down a container, migrate it to another node, and start it up on another node just fine. (using the shared folder / glusterfs backing)
21:42 JoeJulian There's still a lot of controversy about migrating containers and how (or if) to do that.
21:43 linagee also, when I run it as a "real" VM instead of a container, I get near zero downtime "online migration". but its just all so much slower and resource hoggy as you'd expect.
21:44 linagee definitely cheaper than a VMWare "VMotion" license though
21:44 linagee "we want $,$$$ for this license! whee! we are VMware!"
21:45 klaxa joined #gluster
21:46 JoeJulian I run everything in vms. It's easier to manage, imho. I don't really see any problem with resources. kvm and memory ballooning work quite well at sharing resources.
21:47 linagee I've only got 8GB of memory in each of the nodes that I was given right now. I guess that's the one good thing about Proxmox. If they give me more resources in the future, I may move from containers to "real" VMs if it makes sense.
21:48 linagee JoeJulian: can you "live migrate" between physical hosts using kvm+glusterfs?
21:48 JoeJulian yep
21:48 linagee nice
21:48 JoeJulian libvirtd handles it all
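A sketch of what that looks like with plain libvirt, assuming a guest named myvm whose image sits on a gluster volume reachable from both hosts (all names are placeholders):

    # push a live migration of the running guest to the other hypervisor
    virsh migrate --live --persistent myvm qemu+ssh://otherhost/system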
21:49 linagee are you consuming glusterfs inside of the kvm itself too? (for whatever regular data) or somehow shared from the physical host? (no idea how that would work)
21:49 JoeJulian I go a lot more overkill than I (or most people) need, and I've got a full openstack installation at home.
21:49 JoeJulian At work, we deploy openstack as one of our private cloud offerings.
21:50 linagee isn't openstack a fancy frontend GUI? I thought you said you were against that. ;)
21:50 JoeJulian :D
21:50 linagee inconsistency! exterminate! exterminate!
21:50 JoeJulian I do consume gluster volumes from inside VMs that exist on gluster volumes.
21:50 JoeJulian It's very Incestuous.
21:51 linagee I've tried that with openvz and it isn't happy. (fuse issues, still going through the google'd solutions.)
21:51 linagee you need to mknod a fuse device and then allow it if you get permission denied.
21:51 JoeJulian Yeah, there's some way to make vz ok with fuse, but I've never done it.
21:52 pdrakeweb joined #gluster
21:54 linagee JoeJulian: love the blog btw. I saw it before I even arrived in this chan. ;)  (recognized the background.)
21:54 JoeJulian Thanks
21:55 JoeJulian I've got some new stuff I'm trying to find time to work on.
21:55 JoeJulian New gluster stuff and very new ceph stuff.
21:55 linagee apt-get install more-time      "Package not found."
21:56 linagee I think ceph is evil when you're trying to scrape by on few resources and do weird A->B->C things. ceph wants to access bare disks.
21:56 JoeJulian pacman -S more-time "target not found: more-time"
21:56 linagee (ceph doesn't like software raid)
21:56 JoeJulian But there's no need for software raid with ceph.
21:57 linagee unfortunately my root is on the same software raid. so I'd have to start all over and do some crazy thing.
21:57 linagee (probably stick in a SSD and put the root on that along with metadata for ceph that I've heard likes to be on SSD. but eh. no time for all of that.)
21:57 JoeJulian The latest version of ceph will allow you to use LVM devices as osds.
21:58 JoeJulian (I know because I added the patches to make it work)
21:58 linagee hahaha. I don't like that. "ceph prohibits it, so we had to *make it work*" :-D
21:58 linagee I mean, its cool that you did that, but it sucks that you had to fight that fight. ;)
21:58 JoeJulian It was actually more of a udev problem.
21:59 JoeJulian And a little bit of a ceph-disk bug
21:59 linagee can you rate ceph vs gluster vs DRBD? (I was all prepped and ready for DRBD, then I tried to sync and it was taking forever. So I found glusterfs, lol.)
22:00 linagee DRBD syncing 12TB = not fast.
22:00 linagee and it kept restarting when I was trying to tune its config
22:00 linagee I'm definitely liking the simplicity of gluster.
22:01 linagee (and I saw that there are config files if I wanted to later play with those.)
22:01 JoeJulian gluster > ceph > tin cans and string > drbd.
22:01 64MAD3GH7 joined #gluster
22:01 linagee LOLOLOL
22:01 shyam joined #gluster
22:02 a_ta joined #gluster
22:03 linagee gluster > ceph > tin cans and string > drbd > NFS > samba > FAT32 > anything microsoft
22:04 linagee (we do use NFS, it just makes me a bit crazy at times. Just after discovering glusterfs, now its like, I wonder if glusterfs would have had those same problems....)
22:04 linagee we have some sort of crazy hybrid of iSCSI and NFS. Maybe both can be replaced with gluster.
22:04 linagee iSCSI failover has never worked quite right from what I hear.
22:05 linagee (I mean in our specific implementation. probably just because of the complexity of it.)
22:14 a_ta_ joined #gluster
22:39 kminooie joined #gluster
22:40 nathwill joined #gluster
22:45 kminooie hi everyone, I recently upgraded (about a day ago) and started using gluster client 3.6 with a gluster 3.3 cluster, and after about a day (today) the cluster itself has started to misbehave (a bunch of processes all over the place are in D state waiting for a file to be read or a directory to be created). what I am trying to ask is: are the clients backward compatible? is there any chance that a client can affect a cluster like this
22:48 kminooie let me rephrase; is it safe to use a newer client (in this case v3.6  ) on an older cluster ( in this case 3.3 ) ?
22:56 cyberbootje joined #gluster
23:01 haomaiwang joined #gluster
23:02 al joined #gluster
23:08 woakes070048 joined #gluster
23:29 nathwill joined #gluster
23:34 mlhamburg_ joined #gluster
23:38 linagee can someone help with how far I should abuse version differences with glusterfs? glusterfsd 3.5.2 talking to glusterfs-client 3.0.5? 3.2.7? I guess "whatever works" when you're really short on space!
23:38 linagee (until I can upgrade my dist)
23:40 beeradb joined #gluster
23:42 linagee looks like 3.0.5 didn't work, 3.2.7 did work.
23:42 linagee (or at least it mounted, lol.)
23:43 Chr1st1an Would keep it on the same version if posible
23:43 linagee I would do all the same version, but breaks packages
23:43 linagee damn you libc6
23:43 linagee (until I can upgrade them to the same dist)
23:43 linagee (which should be soon)
23:44 Chr1st1an The pain of getting gluster up after an upgrade
23:45 linagee really weird. it looks like it mounts, but nothing is in there.
23:45 linagee I guess it will have to be the same version
23:45 kminooie linagee: I am having the same problem; cluster is 3.3, clients are 3.3, 3.5, 3.6 (squeeze and wheezy). it is really a bummer that they don't backport the new versions for old debians
23:46 kminooie but I can say that newer client with older cluster seems to be working
23:46 linagee kminooie: my main node with the glusterfs storage is jessie (3.5) and I'm trying to talk with squeeze (came with 3.0, but even 3.2 backports doesn't seem to work.)
23:47 linagee kminooie: ack. very tempted to ditch 3.5 and go straight to 3.7 with jessie-backports.
23:47 kminooie I am using the packages from the gluster repo. the ones in debian are way old. I was saying that it is a bummer that the gluster ppl don't backport their packages
23:47 linagee kminooie: "Close to 600 bug reports, a mix of feature requests, enhancements and problems have been addressed in a little over 1220 patches since July last year."
23:47 linagee kminooie: running old crap doesn't help stability one bit... DEBIAN! yes I'm talking to you DEBIAN! :-D
23:48 kminooie :)
23:48 linagee not like... "develop branches from git". but, one notch further back than that is the place to be. what developers of a particular thing call "stable".
23:49 kminooie i hear you
23:49 linagee intersting. is 3.6 stable or 3.7?
23:49 Chr1st1an Im going for 3.7.1
23:49 linagee (download page has 3.6, but 3.7 has just been "released")
23:49 kminooie i think 3.6 is the stable one
23:49 kminooie well it is the only one available for wheezy ( gluster repo )
23:50 linagee kminooie: that's because wheezy is going to die now. :)
23:51 linagee (jessie is debian stable as of like... last week or so. ;) )
23:51 Chr1st1an Anyone here having issues that gluster volumes are really slow to start after a complete reboot of all nodes?
23:51 kminooie :) i know
23:51 linagee upgrade your debian systems and enjoy the latest of everything!
23:51 Chr1st1an Takes like 10-15min after the system is up and running before the volume can start
23:51 linagee upgrade your debian systems and enjoy the latest of everything! (from like... last year, lol)
