
IRC log for #gluster, 2016-01-12


All times shown according to UTC.

Time Nick Message
00:06 atrius joined #gluster
00:10 zhangjn joined #gluster
00:14 bluenemo joined #gluster
00:22 shyam joined #gluster
00:41 sankarshan_ joined #gluster
00:48 atrius joined #gluster
00:58 zhangjn joined #gluster
01:03 amye joined #gluster
01:05 EinstCrazy joined #gluster
01:07 EinstCrazy joined #gluster
01:26 hagarth joined #gluster
01:30 Lee1092 joined #gluster
01:45 hagarth joined #gluster
01:48 18WABROP8 joined #gluster
01:51 EinstCrazy joined #gluster
02:03 EinstCra_ joined #gluster
02:07 thisischris joined #gluster
02:08 EinstCrazy joined #gluster
02:09 harish_ joined #gluster
02:18 shyam joined #gluster
02:22 kanagaraj joined #gluster
02:34 thisischris Does writing directly to a brick on a replicated volume have any negative effects other than leaving the volume in an inconsistent state?
02:40 farhorizon joined #gluster
02:41 shyam joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:51 overclk joined #gluster
02:57 uebera|| joined #gluster
03:00 kanagaraj joined #gluster
03:03 farhorizon joined #gluster
03:05 zhangjn joined #gluster
03:06 hagarth joined #gluster
03:19 zhangjn joined #gluster
03:25 nangthang joined #gluster
03:33 EinstCrazy joined #gluster
03:33 harish_ joined #gluster
03:36 ramky_ joined #gluster
03:39 ppai joined #gluster
03:40 d0nn1e joined #gluster
03:48 kanagaraj joined #gluster
03:57 jiffin joined #gluster
03:59 atinm joined #gluster
03:59 nishanth joined #gluster
04:00 shubhendu joined #gluster
04:04 mowntan joined #gluster
04:20 jackdpeterson thisischris -- I'm fighting that right now -- though not intentionally, I believe. It leaves things in an inconsistent state, and it may not be easy to recover if you end up with duplicated files down the road, as I understand it. I'm no gluster expert, but what I do know is that it's a no-no for the most part.
04:21 ramteid joined #gluster
04:21 RameshN joined #gluster
04:28 sakshi joined #gluster
04:29 ashiq_ joined #gluster
04:32 bharata-rao joined #gluster
04:33 Manikandan joined #gluster
04:39 kotreshhr joined #gluster
04:44 kanagaraj joined #gluster
04:47 anil joined #gluster
04:51 harish_ joined #gluster
04:54 RameshN joined #gluster
04:56 hagarth joined #gluster
04:57 ndarshan joined #gluster
05:02 kanagaraj_ joined #gluster
05:08 vmallika joined #gluster
05:16 kanagaraj joined #gluster
05:16 Apeksha joined #gluster
05:17 Apeksha joined #gluster
05:19 kanagaraj_ joined #gluster
05:19 kdhananjay joined #gluster
05:20 pppp joined #gluster
05:25 EinstCra_ joined #gluster
05:28 Bhaskarakiran joined #gluster
05:30 natarej joined #gluster
05:30 EinstCrazy joined #gluster
05:32 rafi joined #gluster
05:33 poornimag joined #gluster
05:35 zhangjn joined #gluster
05:35 atalur joined #gluster
05:40 aravindavk joined #gluster
05:40 nbalacha joined #gluster
05:46 Humble joined #gluster
05:47 skoduri joined #gluster
05:48 overclk joined #gluster
05:50 zhangjn joined #gluster
05:53 kanagaraj joined #gluster
06:01 hgowtham joined #gluster
06:01 vimal joined #gluster
06:03 kotreshhr joined #gluster
06:03 EinstCrazy joined #gluster
06:03 Saravana_ joined #gluster
06:04 kanagaraj joined #gluster
06:04 hos7ein joined #gluster
06:04 hchiramm joined #gluster
06:14 karnan joined #gluster
06:14 karnan_ joined #gluster
06:26 7YUAALTPU joined #gluster
06:26 kanagaraj joined #gluster
06:32 kotreshhr joined #gluster
06:35 zhangjn joined #gluster
06:38 zhangjn_ joined #gluster
06:39 EinstCrazy joined #gluster
06:45 EinstCra_ joined #gluster
06:56 [Enrico] joined #gluster
06:57 EinstCrazy joined #gluster
06:57 dusmant joined #gluster
06:57 EinstCrazy joined #gluster
07:00 EinstCra_ joined #gluster
07:06 haomaiwa_ joined #gluster
07:08 EinstCrazy joined #gluster
07:14 itisravi joined #gluster
07:14 mobaer joined #gluster
07:26 haomaiwa_ joined #gluster
07:27 mhulsman joined #gluster
07:27 mhulsman joined #gluster
07:30 zhangjn joined #gluster
07:30 jtux joined #gluster
07:34 inodb joined #gluster
07:41 EinstCra_ joined #gluster
07:46 drowe joined #gluster
07:58 haomaiwang joined #gluster
07:59 haomaiwang joined #gluster
08:01 haomaiwang joined #gluster
08:07 EinstCrazy joined #gluster
08:08 ivan_rossi joined #gluster
08:08 ivan_rossi left #gluster
08:11 unforgiven512 joined #gluster
08:12 EinstCrazy joined #gluster
08:16 zhangjn joined #gluster
08:30 dusmant joined #gluster
08:31 inodb joined #gluster
08:32 haomaiwang joined #gluster
08:39 Saravanakmr joined #gluster
08:43 haomaiwang joined #gluster
08:49 msvbhat win3
08:49 kdhananjay left #gluster
08:50 kdhananjay joined #gluster
08:56 ppai_ joined #gluster
08:58 zhangjn joined #gluster
08:58 atalur_ joined #gluster
09:00 harish_ joined #gluster
09:01 64MAAQ77D joined #gluster
09:02 kotreshhr joined #gluster
09:06 aravindavk joined #gluster
09:11 nbalacha joined #gluster
09:14 Sjors joined #gluster
09:17 Saravanakmr joined #gluster
09:18 haomaiwang joined #gluster
09:23 ctria joined #gluster
09:24 ramky joined #gluster
09:25 nangthang joined #gluster
09:31 sankarshan_ joined #gluster
09:41 sakshi joined #gluster
09:46 badone joined #gluster
09:49 Slashman joined #gluster
09:53 nbalacha joined #gluster
09:54 Saravanakmr joined #gluster
09:54 glafouille joined #gluster
09:55 vimal joined #gluster
09:55 anil joined #gluster
09:55 social joined #gluster
09:55 Vaelatern joined #gluster
10:01 Bhaskarakiran joined #gluster
10:01 atinm joined #gluster
10:06 MessedUpHare joined #gluster
10:07 haomaiwang joined #gluster
10:09 arcolife joined #gluster
10:17 pdrakewe_ joined #gluster
10:18 malevolent_ joined #gluster
10:25 R0ok_ joined #gluster
10:29 ira joined #gluster
10:29 kotreshhr joined #gluster
10:31 anil joined #gluster
10:33 dusmant joined #gluster
10:38 Bhaskarakiran joined #gluster
10:41 kovshenin joined #gluster
10:45 vmallika joined #gluster
10:46 kkeithley1 joined #gluster
10:47 Saravanakmr joined #gluster
10:47 kdhananjay joined #gluster
10:48 Bhaskarakiran joined #gluster
11:00 zhangjn joined #gluster
11:01 karthik_u joined #gluster
11:12 unforgiven512 joined #gluster
11:12 unforgiven512 joined #gluster
11:22 shyam1 joined #gluster
11:26 cholcombe joined #gluster
11:34 kanagaraj_ joined #gluster
11:39 badone joined #gluster
11:50 kanagaraj joined #gluster
11:54 dusmant joined #gluster
11:55 kotreshhr joined #gluster
11:56 muneerse joined #gluster
12:01 hgowtham joined #gluster
12:06 tswartz joined #gluster
12:14 zhangjn joined #gluster
12:16 Saravanakmr joined #gluster
12:38 ppai_ joined #gluster
12:45 EinstCrazy joined #gluster
12:52 kdhananjay joined #gluster
12:53 MessedUpHare joined #gluster
12:56 haomaiwa_ joined #gluster
12:56 unclemarc joined #gluster
12:56 bennyturns joined #gluster
12:57 muneerse joined #gluster
12:58 hgowtham joined #gluster
13:00 B21956 joined #gluster
13:06 haomaiwa_ joined #gluster
13:08 emitor joined #gluster
13:10 kdhananjay joined #gluster
13:10 jwd joined #gluster
13:10 atalur_ joined #gluster
13:12 dusmant joined #gluster
13:14 muneerse joined #gluster
13:14 cholcombe joined #gluster
13:15 lpabon_ joined #gluster
13:15 ppai_ joined #gluster
13:15 cristian_ joined #gluster
13:15 Bhaskarakiran joined #gluster
13:16 edong23 joined #gluster
13:17 ashiq_ joined #gluster
13:17 Bhaskarakiran joined #gluster
13:19 muneerse joined #gluster
13:30 amye joined #gluster
13:31 mbukatov joined #gluster
13:31 amye joined #gluster
13:32 Guest29589 left #gluster
13:33 shaunm joined #gluster
13:38 kanagaraj joined #gluster
13:39 zhangjn joined #gluster
13:39 zhangjn joined #gluster
13:44 atalur joined #gluster
13:45 kanagaraj_ joined #gluster
13:48 kanagaraj__ joined #gluster
13:49 arcolife joined #gluster
13:51 shyam joined #gluster
13:51 kotreshhr joined #gluster
13:52 atalur joined #gluster
13:55 farhorizon joined #gluster
13:57 sankarshan_ joined #gluster
13:59 haomaiwang joined #gluster
14:00 kanagaraj_ joined #gluster
14:01 haomaiwa_ joined #gluster
14:01 ira joined #gluster
14:02 Simmo joined #gluster
14:03 Simmo Hi All, and Happy New 2016 : )
14:05 Simmo Please, help me understand some basics of GlusterFS:
14:05 rwheeler joined #gluster
14:05 Simmo What should I do when a node restarts? When it happens, the "replication" stops
14:06 Simmo Not sure what I've missed :-/
14:07 ashiq__ joined #gluster
14:09 Simmo Should I re-mount with mount -t glusterfs <main node ip>:/<volume name> </path>? Should I add that command to fstab?
14:12 emitor Hi Simmo, what's your configuration?
14:12 Simmo Hi Emitor! : )
14:12 Simmo I have two nodes, one volume which consists of one brick
14:13 Simmo it is setup in replication mode
14:13 Simmo (each change on one machine should happen into the other as well)
14:13 emitor yes
14:13 Simmo I think I'm missing some basics
14:13 emitor but how are you mounting the glusterfs?
14:14 julim joined #gluster
14:14 Simmo I have mounted the bricks (and added the line to fstab)
14:14 Simmo then from a "node client" I used the command
14:14 Simmo mount -t glusterfs <main node ip>:/<volume name> </path>
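[Editor's note] The mount command above pairs with an /etc/fstab entry so the client mount survives a reboot, which is what Simmo is asking about. A minimal sketch, assuming placeholder names (server1, myvol1, /mnt/gluster are not from the log); _netdev is a common option to delay mounting until the network is up:

```shell
# Mount a Gluster volume via the FUSE client (server and volume names are placeholders)
mount -t glusterfs server1:/myvol1 /mnt/gluster

# Equivalent /etc/fstab line so the mount comes back after a reboot:
#   server1:/myvol1  /mnt/gluster  glusterfs  defaults,_netdev  0  0
```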
14:15 emitor that's perfect
14:15 emitor and if you write something, it doesn't replicate?
14:15 Simmo doh, actually it does now... '-_-
14:15 emitor hahaha
14:16 Simmo (I had to leave the mount directory /mnt
14:16 Simmo and re-enter it
14:16 Simmo shame on me : )
14:16 Simmo I will try to restart the "server node"
14:16 Simmo now
14:16 emitor did you create some files while the node was off?
14:16 Simmo yes I did
14:16 emitor ok
14:17 Simmo and now I can see them in the "client node"
14:17 Simmo I'm practicing a bit with GlusterFS before putting it in production...
14:17 emitor the replication happens when some operation happens on the file
14:17 Simmo next step is to learn how to add a new node to the volume : )
14:17 emitor or when the self heal process act
14:18 Simmo ah ok
14:18 emitor you should read about split-brain in gluster
14:18 Simmo it means that when a node goes offline, I need to "touch" files in order to get them replicated when the node is back?
14:18 emitor exactly
14:19 Simmo uh, I'll read about the split-brain logic, awesome
14:19 emitor great!
14:19 Simmo Maybe it is better to "force" the self heal process if possible
14:20 Simmo isn't it ? :)
14:20 emitor I'm not sure
14:20 emitor I'm not an expert, just a user :D
14:20 emitor I think that this is a new feature
14:21 Simmo thanks anyway emitor, it helped a lot!
14:21 Simmo Cheers from Salzburg ;)
14:21 emitor I've just updated from gluster 3.4, in that version the only option was to open the files
14:21 emitor You're welcome! From Uruguay ;)
14:21 Simmo I'm on 3.5 right now
14:22 Simmo I hope there's a force healing mechanism
14:22 emitor I guess in 3.7 there is
14:22 Simmo I would love to avoid going recursive on the data : )
14:23 emitor volume heal <VOLNAME> [enable | disable | full |statistics [heal-count [replica <HOSTNAME:BRICKNAME>]] |info [healed | heal-failed | split-brain] |split-brain {bigger-file <FILE> |source-brick <HOSTNAME:BRICKNAME> [<FILE>]}] - self-heal commands on volume specified by <VOLNAME>
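[Editor's note] A few common invocations of the heal command pasted above, as a sketch; the volume name myvol1 matches the one used later in the log, and the subcommands follow the 3.7-era syntax quoted:

```shell
# Trigger a heal of entries the self-heal daemon already knows about
gluster volume heal myvol1

# Force a full crawl of the bricks (heavier, but catches files
# written while a replica node was down)
gluster volume heal myvol1 full

# Inspect what still needs healing, or which files are in split-brain
gluster volume heal myvol1 info
gluster volume heal myvol1 info split-brain
```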
14:23 sankarshan_ joined #gluster
14:24 Simmo command from 3.7 ?
14:24 emitor yes
14:24 emitor but I'm not sure what it does
14:24 emitor this is what I used to do when I got split-brain
14:25 emitor find /mnt/nfs/gluster -noleaf -print0 | xargs --null stat >/dev/null
14:25 emitor this command does a stat of all the files in the mountpoint /mnt/nfs/gluster, where I used to have the volume mounted
14:26 emitor anyway, it shouldn't be something usual to do a server reboot
14:26 emitor I guess
14:26 Simmo great!
14:26 Simmo true : )
14:26 tswartz heh, always expect a reboot
14:26 Simmo but I don't want to discover consequences when it will happen in prod : )
14:27 emitor And before that, you should delete the modified files that have an old version on the rebooted node
14:27 Simmo uh, I see
14:28 Simmo otherwise the older version can win over the newer one
14:28 emitor If they are all new files, there is no problem
14:28 emitor no
14:28 emitor they will be in a split-brain status
14:28 Simmo ok, i need to read more about that : )
14:28 emitor you have to tell gluster somehow which one is the correct one
14:29 emitor yes, that would be good i guess
14:29 Simmo for example, now I've restarted the "main node"...
14:29 emitor but at least you have a lead now :P
14:29 Simmo and I found it strange that I have to give the mount -t glusterfs etc command
14:29 Simmo on that node
14:29 Simmo in order to get files back
14:30 emitor I'm not getting the problem
14:30 emitor you have the glusterfs mounted on the nodes also?
14:30 emitor or are you talking about a client?
14:31 Simmo about the one that i consider "server node"
14:31 Simmo even if, in glusterfs, I understand that the definition is not quite correct
14:31 Simmo kind of, there is not really a "server node"
14:40 Simmo lovely command: gluster volume heal myvol1 info split-brain
14:40 Simmo :-*
14:40 emitor haha
14:42 Simmo I'm discovering a new world! :D
14:42 Simmo sweet also that: gluster volume heal myvol1
14:45 hamiller joined #gluster
14:47 Apeksha joined #gluster
14:47 arcolife joined #gluster
14:49 tswartz so after getting all of my stuff setup, i just realized that the package that got installed is 3.7.6. are there going to be any problems downgrading to stable from here?
14:55 plarsen joined #gluster
14:57 cholcombe joined #gluster
15:00 coredump joined #gluster
15:01 nbalacha joined #gluster
15:07 hagarth joined #gluster
15:10 farhorizon joined #gluster
15:13 mobaer1 joined #gluster
15:23 m0zes joined #gluster
15:24 rwheeler joined #gluster
15:25 bowhunter joined #gluster
15:28 chirino joined #gluster
15:29 kbyrne joined #gluster
15:30 kbyrne joined #gluster
15:32 m0zes joined #gluster
15:35 Manikandan joined #gluster
15:41 dblack joined #gluster
15:43 ramky joined #gluster
15:48 rafi joined #gluster
15:50 sankarshan_ joined #gluster
15:50 Simmo Question: on a volume of Type: Replicate, can I have 3 nodes?
16:03 tswartz Simmo, sure
16:03 Simmo Just implemented : )
16:03 Simmo Thanks tswartz ;)
16:04 Simmo I got scared by an error message but with some googling I solved it '-_-
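[Editor's note] A hedged sketch of how a third replica is typically added to an existing two-node replicated volume, as Simmo describes doing here. The hostname server3, brick path /data/brick1, and volume name myvol1 are placeholders, and exact syntax can differ by Gluster version:

```shell
# Make the new node part of the trusted pool
gluster peer probe server3

# Add the new brick while raising the replica count from 2 to 3
gluster volume add-brick myvol1 replica 3 server3:/data/brick1

# Kick off a full heal so existing data is copied onto the new brick
gluster volume heal myvol1 full
```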
16:07 julim joined #gluster
16:08 jackdpeterson joined #gluster
16:09 kbyrne joined #gluster
16:09 jackdpeterson Hello all, resuming from yesterday's GlusterFS weirdness, I'm not seeing 1446 heal-failed entries listed in the latest Index runs. Most of these are gfid's. I have a replica 3 configured (no distribute) and server-side quorum enabled at fixed 2. What can I do to get these entries properly healed?
16:09 jackdpeterson *now
16:09 jackdpeterson When I run the heal info bit, one node has substantially fewer entries than the other two.
16:10 shubhendu joined #gluster
16:12 ekzsolt left #gluster
16:12 matclayton joined #gluster
16:13 muneerse joined #gluster
16:19 Simmo left #gluster
16:22 liviudm joined #gluster
16:24 farhorizon joined #gluster
16:26 jeek joined #gluster
16:31 nishanth joined #gluster
16:40 wushudoin joined #gluster
16:42 shaunm joined #gluster
16:43 jwaibel joined #gluster
16:43 jwd_ joined #gluster
16:45 klaxa joined #gluster
16:45 18VAANCVU joined #gluster
17:04 skoduri joined #gluster
17:06 amye joined #gluster
17:13 jiffin joined #gluster
17:19 calavera joined #gluster
17:29 mobaer joined #gluster
17:37 JesperA joined #gluster
18:12 ashiq__ joined #gluster
18:14 ramky joined #gluster
18:16 uebera|| joined #gluster
18:21 mhulsman joined #gluster
18:21 anil joined #gluster
18:29 emitor Hi jackdpeterson, did you reboot some of the gluster servers?
18:32 Rapture joined #gluster
18:34 RayTrace_ joined #gluster
18:34 vimal joined #gluster
18:41 javi404 joined #gluster
18:55 plarsen joined #gluster
19:02 farhorizon joined #gluster
19:08 julim joined #gluster
19:10 diegows joined #gluster
19:35 RayTrace_ joined #gluster
19:37 dblack joined #gluster
19:41 julim joined #gluster
19:46 ira joined #gluster
19:47 matclayton joined #gluster
19:50 calavera joined #gluster
19:54 mtanner joined #gluster
19:55 calavera_ joined #gluster
19:57 farhorizon joined #gluster
19:59 amye joined #gluster
20:01 raghu joined #gluster
20:20 shaunm joined #gluster
20:24 mhulsman joined #gluster
20:25 m0zes_ joined #gluster
20:25 hagarth joined #gluster
20:26 neofob joined #gluster
20:27 matclayton joined #gluster
20:27 atrius joined #gluster
20:41 jwd_ joined #gluster
20:43 d0nn1e joined #gluster
20:47 timotheus1 joined #gluster
20:47 mhulsman joined #gluster
20:50 amye joined #gluster
21:13 jwd joined #gluster
21:33 shyam left #gluster
21:44 dlambrig joined #gluster
21:46 refj joined #gluster
21:53 refj Is a replicated distributed glusterfs setup (2 x 2 = 4) a good choice for a drupal cluster? I'm asking because I'm seeing suboptimal performance when executing php from a gluster client mount point.
22:02 jonba_000 joined #gluster
22:02 jonba_000 hey so this is a long shot, but does anyone know if glusterfs geo-replication supports multi-master?
22:06 farhorizon joined #gluster
22:10 rwheeler joined #gluster
22:21 mhulsman joined #gluster
22:27 hagarth joined #gluster
22:42 amye joined #gluster
22:54 dgbaley joined #gluster
22:57 emitor joined #gluster
23:18 ghenry joined #gluster
23:18 ghenry joined #gluster
