
IRC log for #gluster, 2016-08-31


All times shown according to UTC.

Time Nick Message
00:15 sandersr joined #gluster
01:03 shdeng joined #gluster
01:04 dnunez joined #gluster
01:29 wadeholler joined #gluster
01:33 Lee1092 joined #gluster
01:36 Jacob843 joined #gluster
01:38 kramdoss_ joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:53 harish joined #gluster
02:25 kramdoss_ joined #gluster
03:03 Gambit15 joined #gluster
03:05 magrawal joined #gluster
03:15 alvinstarr joined #gluster
03:26 muneerse2 joined #gluster
03:31 kdhananjay joined #gluster
03:39 aspandey joined #gluster
03:51 shubhendu joined #gluster
03:52 itisravi joined #gluster
03:59 kdhananjay1 joined #gluster
04:00 daMaestro joined #gluster
04:07 atinm joined #gluster
04:08 riyas joined #gluster
04:08 kdhananjay joined #gluster
04:17 nhayashi_ joined #gluster
04:17 Gnomethrower joined #gluster
04:17 k4n0 joined #gluster
04:40 ndarshan joined #gluster
04:43 ppai joined #gluster
04:44 shubhendu joined #gluster
04:51 nbalacha joined #gluster
04:52 Jacob843 joined #gluster
04:54 ndarshan joined #gluster
05:05 karthik_ joined #gluster
05:18 [diablo] joined #gluster
05:21 prasanth joined #gluster
05:28 aspandey joined #gluster
05:29 karnan joined #gluster
05:29 eightyeight so, while i'm working through the gluster documentation (both on readthedocs, and redhat), a couple points of interest:
05:29 eightyeight RFC 5737 defines 3 network ranges specifically set aside for documentation (not routable):
05:30 eightyeight TEST-NET-1 192.0.2.0/24
05:30 eightyeight TEST-NET-2 198.51.100.0/24
05:30 eightyeight TEST-NET-3 203.0.113.0/24
05:31 eightyeight RFC 2606 defines reserved domain names
05:31 eightyeight example.com, example.org, and example.net are set aside specifically for documentation
05:31 eightyeight there is also an IPv6 RFC for docs
05:31 eightyeight FYI
05:32 msvbhat joined #gluster
05:34 ankitraj joined #gluster
05:37 eightyeight so instead of `keystone.server.com', `server.example.com'
05:37 eightyeight instead of `10.20.30.40', `192.0.2.40'
05:37 eightyeight etc.
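The substitutions eightyeight suggests can even be scripted over existing docs; a throwaway sketch (the file and the offending strings are invented for illustration; RFC 3849 reserves 2001:db8::/32 as the IPv6 counterpart he alludes to):

```shell
# Swap real-looking addresses in a doc for the RFC 5737 / RFC 2606
# reserved ones. The sample file and its contents are made up.
doc=$(mktemp)
cat > "$doc" <<'EOF'
Mount the volume from keystone.server.com:
  mount -t glusterfs 10.20.30.40:/gv0 /mnt
EOF

# Replace a real hostname with an RFC 2606 reserved name...
sed -i 's/keystone\.server\.com/server.example.com/' "$doc"
# ...and a routable-looking IP with a TEST-NET-1 (192.0.2.0/24) address.
sed -i 's/10\.20\.30\.40/192.0.2.40/' "$doc"

cat "$doc"
```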
05:41 derjohn_mob joined #gluster
05:42 RameshN joined #gluster
05:45 jkroon joined #gluster
05:46 raghug joined #gluster
05:50 d0nn1e joined #gluster
05:50 skoduri joined #gluster
05:51 mhulsman joined #gluster
05:53 Bhaskarakiran joined #gluster
05:55 nbalacha joined #gluster
05:55 hgowtham joined #gluster
05:56 Saravanakmr joined #gluster
05:58 ppai joined #gluster
05:59 Muthu_ joined #gluster
06:00 ashiq joined #gluster
06:03 ndarshan joined #gluster
06:03 Philambdo joined #gluster
06:18 ramky joined #gluster
06:18 msvbhat joined #gluster
06:19 arcolife joined #gluster
06:24 guest joined #gluster
06:24 guest Is it possible to use the bd xlator with the crypt xlator?
06:26 kotreshhr joined #gluster
06:27 jtux joined #gluster
06:29 shubhendu joined #gluster
06:32 csaba joined #gluster
06:39 satya4ever joined #gluster
06:40 harish__ joined #gluster
06:40 Manikandan_ joined #gluster
06:47 kshlm joined #gluster
06:50 msvbhat joined #gluster
06:54 ivan_rossi joined #gluster
06:54 jtux joined #gluster
06:56 kovshenin joined #gluster
06:56 jkroon joined #gluster
06:59 rastar joined #gluster
07:01 ju5t joined #gluster
07:03 an_ joined #gluster
07:07 jri joined #gluster
07:11 nbalacha joined #gluster
07:13 raghuhg joined #gluster
07:14 jith_ hi all, I want to replace an existing non-failed brick with a new one.. for this, should i stop the volume??
07:17 pur joined #gluster
07:18 [diablo] joined #gluster
07:19 devyani7 joined #gluster
07:21 poornima joined #gluster
07:28 Sebbo1 joined #gluster
07:30 arcolife joined #gluster
07:32 an_ joined #gluster
07:36 fsimonce joined #gluster
07:37 jiffin joined #gluster
07:43 Sebbo4 joined #gluster
07:44 Sebbo5 joined #gluster
07:49 hackman joined #gluster
07:55 ju5t joined #gluster
08:09 deniszh joined #gluster
08:12 Pupeno joined #gluster
08:14 mhulsman joined #gluster
08:23 derjohn_mob joined #gluster
08:28 prth joined #gluster
08:29 twisted` joined #gluster
08:32 aravindavk joined #gluster
08:38 mhulsman joined #gluster
08:39 kdhananjay joined #gluster
08:40 an_ joined #gluster
08:43 jtux joined #gluster
08:45 atalur joined #gluster
08:45 amye joined #gluster
08:54 atalur_ joined #gluster
08:56 ju5t joined #gluster
08:56 robb_nl joined #gluster
09:10 skoduri joined #gluster
09:17 wadeholler joined #gluster
09:21 kovshenin joined #gluster
09:24 fedele left #gluster
09:30 cliluw joined #gluster
09:36 legreffier joined #gluster
09:36 mhulsman joined #gluster
09:40 mhulsman joined #gluster
09:41 kovshenin joined #gluster
09:47 an_ joined #gluster
09:53 spalai joined #gluster
09:56 an__ joined #gluster
10:08 skoduri joined #gluster
10:23 an_ joined #gluster
10:25 wadeholler joined #gluster
10:25 shdeng joined #gluster
10:25 shdeng joined #gluster
10:33 shyam joined #gluster
10:33 archit_ joined #gluster
10:35 muneerse joined #gluster
10:47 atinm joined #gluster
10:51 msvbhat joined #gluster
11:04 poornima joined #gluster
11:10 masber joined #gluster
11:14 atinm joined #gluster
11:15 B21956 joined #gluster
11:21 cloph_away joined #gluster
11:22 dlambrig joined #gluster
11:23 masber joined #gluster
11:29 johnmilton joined #gluster
11:29 poornima joined #gluster
11:30 Klas is there a way to syslog everything in gluster? I'm only finding volume level syslogging currently.
11:37 aravindavk joined #gluster
11:45 karthik_ joined #gluster
11:48 jiffin1 joined #gluster
12:02 jdarcy joined #gluster
12:03 jiffin1 joined #gluster
12:28 ira joined #gluster
12:28 mhulsman joined #gluster
12:29 jith_ how to check whether glusterfs volume is mounted to a client?? how to check it from server???
12:31 Klas gluster vol status volname clients
12:31 Klas seems to be the answer
12:32 jiffin it works, but it may be difficult to extract meaningful details from that
12:34 Klas oh, how come?
12:34 Klas the information seems pretty clear-cut
12:35 Klas well, barring that it works on IP, not names
12:37 mhulsman joined #gluster
12:38 unclemarc joined #gluster
12:40 ben453 joined #gluster
12:40 jith_ Klas, thanks
12:40 archit_ joined #gluster
12:42 jith_ Klas, no it is showing something else
12:42 jith_ It is showing the brick details
12:43 Klas to me, it shows the mounted clients and the servers accessing each individual brick
12:43 Klas it's not very easy to parse it though
12:44 Klas it's kinda annoying that it displays servers as clients
12:44 jith_ oh ok.. thanks.. i havent mounted yet
12:44 jith_ will check
12:44 Klas haha
12:44 Klas yeah, if a client is mounted, it does show up
12:47 jith_ yes it is showing.. but along with glusterservers twice
12:47 jith_ what that glusterserver details?? brick's port number?
12:50 jith_ Clients connected : 5
12:50 jith_ Hostname                                               BytesRead    BytesWritten
12:50 jith_ --------                                               ---------    ------------
12:50 glusterbot jith_: ------'s karma is now -1
12:50 jith_ 10.244.110.11:49137                                          860             440
12:50 glusterbot jith_: -------'s karma is now -8
12:50 jith_ 10.244.110.11:49135                                         6612            4272
12:50 glusterbot jith_: ----------'s karma is now -4
12:50 jith_ 10.244.110.13:49134                                          860             440
12:50 jith_ 10.244.110.13:49132                                        52784           46336
12:50 jith_ 10.244.110.12:49145                                        22468           17464
12:50 cloph @paste
12:50 glusterbot cloph: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
12:51 jith_ only 10.244.110.12 is the only client
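The listing jith_ pasted can be boiled down to "which hosts are connected" with a quick awk pass over a captured copy of the output (sample lines are the ones above; remember that servers appear in this list alongside real clients):

```shell
# Reduce a captured `gluster volume status <vol> clients` listing to
# the unique hosts. The host:port lines below are jith_'s sample.
clients=$(awk -F: '/^[0-9]+\./ {print $1}' <<'EOF' | sort -u
10.244.110.11:49137
10.244.110.11:49135
10.244.110.13:49134
10.244.110.13:49132
10.244.110.12:49145
EOF
)
echo "$clients"
```

Mapping the surviving addresses back to "server" vs "client" is still manual, which is Klas's complaint about the command.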
12:51 jith_ ok :)
12:51 jwd joined #gluster
12:59 shyam joined #gluster
13:00 mhulsman joined #gluster
13:07 derjohn_mob joined #gluster
13:07 Klas cloph: cool
13:09 Klas we are considering our update intervals, is there a good place to check what new patches are applied in a point release?
13:11 guhcampos joined #gluster
13:18 rastar joined #gluster
13:25 legreffier joined #gluster
13:28 squizzi joined #gluster
13:35 plarsen joined #gluster
13:38 jiffin1 joined #gluster
13:41 skylar joined #gluster
13:43 plarsen joined #gluster
13:45 kpease joined #gluster
13:49 jiffin1 joined #gluster
14:02 rastar joined #gluster
14:02 rwheeler joined #gluster
14:05 baojg joined #gluster
14:06 dlambrig joined #gluster
14:09 ira joined #gluster
14:14 kotreshhr left #gluster
14:15 spalai left #gluster
14:23 msvbhat joined #gluster
14:24 Muthu_ joined #gluster
14:24 nohitall joined #gluster
14:25 nohitall hi, i'm trying to mount a volume, i can telnet to the ip and port, but if I try to mount it with "mount -t glusterfs ip:/volumename /mnt" I get 0-glusterfs: failed to get the 'volume file' from server
14:26 nohitall v 3.7.9
14:29 prth joined #gluster
14:31 Manikandan_ joined #gluster
14:39 aravindavk joined #gluster
14:41 Vide joined #gluster
14:41 kdhananjay joined #gluster
14:42 Vide Hello, I'm going to setup a 3-machines oVirt cluster and I want to use GlusterFS as the data backend, and I was wondering with Gluster 3.8 what's the best way to configure disks for bricks
14:43 Vide I was reading https://videos.cdn.redhat.com/summit2015/presentations/13767_red-hat-gluster-storage-performance.pdf and https://www.redhat.com/en/about/blog/red-hat-announces-new-capabilities-red-hat-gluster-storage for example
14:43 glusterbot Title: Red Hat Announces New Capabilities for Red Hat Gluster Storage (at www.redhat.com)
14:43 Vide but given that my servers will have (to start) 4 SSD disks each, I don't know if I can call those "JBODs"
14:45 derjohn_mob joined #gluster
14:46 Vide moreover, I want to have the OS behind gluster in a RAID configuration, so what about a classic RAID10 + replica-2 in my scenario?
14:48 nohitall is gluster that bad with lots of small files? I am doing a local copy to a gluster share and it's <200kB/s
14:48 nohitall its just hundretthousands of small txts
14:51 Manikandan joined #gluster
14:51 jiffin joined #gluster
14:58 wushudoin joined #gluster
14:59 thwam joined #gluster
15:05 Pupeno joined #gluster
15:09 muneerse joined #gluster
15:12 ashiq joined #gluster
15:14 johnmilton joined #gluster
15:18 rafi joined #gluster
15:18 Arrfab atinm , ndevos : last week we spoke about renaming a gluster volume name by : stopping the volume, stopping glusterd, modifying all /var/lib/glusterd/* file names and content with new name, restarting glusterd
15:18 Arrfab gluster volume list shows the new name, but refuses to start it
15:19 atinm Arrfab, and what does glusterd log say?
15:21 Arrfab atinm: argh, I had to increase level, but it seems glusterfsd was still running and blocking the port
15:21 Arrfab so I had to stop it, then restart glusterd and the brick is working
15:21 Arrfab let me see if that works on all nodes
15:22 atinm OK
15:22 atinm Arrfab, I will be logging off, in case you face any further issue send us an email and we will get back
15:22 Arrfab atinm: thanks .. I think that is ok
15:23 jiffin1 joined #gluster
15:23 Arrfab wondering why glusterd/glusterfsd, and the latter blocking the tcp port (so glusterd complained about tcp transport, obviously)
15:23 Arrfab but I have a something that seems to work, so I'll write a blog post about it too
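The file-rename step of the procedure Arrfab describes, sketched against a mock /var/lib/glusterd tree (the real run also needs the volume, glusterd, and — as he found — any lingering glusterfsd brick processes stopped first; paths and volume names here are placeholders):

```shell
# Mock the rename: rewrite the volume name inside the files under
# vols/<old>, then rename the directory. Run here against a temp dir
# standing in for /var/lib/glusterd.
old=oldvol; new=newvol
base=$(mktemp -d)
mkdir -p "$base/vols/$old"
echo "volname=$old" > "$base/vols/$old/info"

# Rewrite the old name inside every file under the volume directory...
grep -rl "$old" "$base/vols/$old" | xargs -r sed -i "s/$old/$new/g"
# ...then rename the directory itself.
mv "$base/vols/$old" "$base/vols/$new"

cat "$base/vols/$new/info"   # volname=newvol
```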
15:24 kovshenin joined #gluster
15:27 kovshenin joined #gluster
15:30 kshlm joined #gluster
15:32 eightyeight are soft quota limits supported? the RH docs say they aren't, but they're included in the output, and clearly configurable..
15:32 eightyeight so what's the story? are soft limits supports on quotas?
15:32 msvbhat joined #gluster
15:32 eightyeight s/supporst/supported/
15:33 glusterbot eightyeight: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
15:36 derjohn_mob joined #gluster
15:43 skoduri joined #gluster
15:44 ira joined #gluster
15:45 devyani7 joined #gluster
15:50 shubhendu joined #gluster
15:51 ivan_rossi left #gluster
15:55 jwd joined #gluster
15:58 dlambrig joined #gluster
16:02 robb_nl joined #gluster
16:06 shortdudey123 joined #gluster
16:16 derjohn_mob joined #gluster
16:17 muneerse2 joined #gluster
16:22 Gambit15 joined #gluster
16:32 jkroon joined #gluster
16:34 JoeJulian Klas: I just apply https://help.github.com/articles/comparing-commits-across-time/ to https://github.com/gluster/glusterfs to check which patches are applied between tags.
16:34 glusterbot Title: Comparing commits across time - User Documentation (at help.github.com)
16:35 Klas JoeJulian: ok, thanks
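JoeJulian's tag-comparison tip in command form, demonstrated on a throwaway repo; against the real tree it would be something like `git -C glusterfs log --oneline v3.7.14..v3.7.15` (example tags):

```shell
# Build a two-tag throwaway repo, then list the patches between tags.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you

echo a > f; git add f; git commit -qm 'first';          git tag v1
echo b > f;            git commit -qam 'fix: something'; git tag v2

# The range v1..v2 shows only commits after v1, up to and including v2.
git log --oneline v1..v2
```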
16:36 JoeJulian nohitall: Yes, "hundretthousands of small txts" is not a good use case for *any* clustered filesystem unless you don't care if your file actually made it.
16:38 JoeJulian eightyeight: RH docs are for the "Red Hat Gluster Storage product" which is a downstream release. They may have removed soft limits from theirs.
16:38 JoeJulian Or they may be newer than what's released downstream.
16:41 Philambdo joined #gluster
16:47 dkalleg Anyone know how I can config gluster logs permissions?  Default, everything under /var/log/glusterfs/* is 0600, but I'd like to have them write out as 0640.
16:48 dkalleg I tried setting umask rules in upstart (ubuntu14.04, fyi), but no effect.
16:49 JoeJulian Create them at 640 first?
16:49 aravindavk joined #gluster
16:52 dkalleg Right.  I don't want to have to go chmod every log file.
16:53 JoeJulian Then you'll have to edit the source and compile your own packages.
16:53 dkalleg oh :-/
16:53 dnunez_ joined #gluster
16:53 JoeJulian Might make sense to file a bug for a feature request for a way to do that otherwise.
16:53 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
16:54 muneerse joined #gluster
16:54 JoeJulian Or just use syslogs?
16:55 JoeJulian I _think_ that's an option... I mean I know you can configure gluster to log to syslog. Not sure if you can disable the file logging though.
16:55 JoeJulian Other than to turn up the log-level to emergency.
16:56 JoeJulian or critical or whatever that is.
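A hedged sketch of the syslog route JoeJulian suggests, for a hypothetical volume gv0 — the option names are from memory of the 3.x volume-option table, so verify them with `gluster volume set help` before relying on this:

```shell
# Send brick and client log messages to syslog at INFO level
# (option names are an assumption; check `gluster volume set help`).
gluster volume set gv0 diagnostics.brick-sys-log-level INFO
gluster volume set gv0 diagnostics.client-sys-log-level INFO

# File logging can then be quieted (not fully disabled, per JoeJulian)
# by raising the file log level:
gluster volume set gv0 diagnostics.brick-log-level CRITICAL
```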
17:05 nohitall JoeJulian: thanks for reply. I got another issue: on my clients I can mount volume with mount -t glusterfs, but it works as nfs mount, any ideas?
17:05 nohitall nfs mount is fine for me too, its just reads
17:09 dkalleg thanks @JoeJulian
17:10 skoduri joined #gluster
17:11 cloph nohitall: you probably wrote something else than you were thinking. you can mount it as fuse, but it works as nfs mount? what should that mean?
17:12 JoeJulian nohitall: Yeah, I see no issue. Two positive results.
17:13 nohitall uh fuse then
17:13 nohitall I just wonder why I can't mount it with -t glusterfs
17:13 nohitall logs says 0-glusterfs: failed to get the  'volume file'
17:13 nohitall but if fuse mount works too its ok I guess
17:13 JoeJulian Ah, there's the "can't". You said "can" before which is why we were confused.
17:14 nohitall ah sorry
17:14 nohitall yea im stressed, working on setting everything up :D
17:14 jri joined #gluster
17:14 JoeJulian I'm sure.
17:14 JoeJulian @pasteinfo
17:14 glusterbot JoeJulian: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
17:14 nohitall I can already tell you that the clients are behind a NAT so I assume that might be the issue
17:15 JoeJulian Yes, that is the issue.
17:15 nohitall https://dpaste.de/0DVT
17:15 glusterbot Title: dpaste.de: Snippet #378494 (at dpaste.de)
17:16 nohitall so fuse mount has no issues with NAT it seems
17:16 JoeJulian You mean nfs mount has no issues with nat. That is correct.
17:16 JoeJulian The fuse mount (mount -t glusterfs) does.
17:16 nohitall ok well I got a working solution so that is ok
17:17 nohitall from my benchs fuse mount had better read performance anyway
17:17 JoeJulian Personally, I would set up a tunneled layer 2, like vxlan, so there's no nat.
17:17 sandersr joined #gluster
17:17 mrten joined #gluster
17:18 nohitall not sure if worth the effort, its just for the mounted files for webserver
17:18 nohitall I try to keep them simple in setup
17:19 nohitall can you explain why the glusterfs mount does not work?
17:20 JoeJulian @mount server
17:20 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
17:21 JoeJulian So the *client* has to be able to open a connection to the servers (all of them).
17:22 nohitall but but that should work
17:22 nohitall but there is no negative issue with my solution?
17:22 nohitall or can I run into issues
17:22 ashiq joined #gluster
17:23 nohitall btw is there a smart way to have a failover mount?
17:23 nohitall e.g. if my server dies where I mount from
17:23 nohitall to mount from the other one instead
17:24 emitor_uy joined #gluster
17:25 JoeJulian @rrnds
17:25 glusterbot JoeJulian: I do not know about 'rrnds', but I do know about these similar topics: 'rrdns'
17:25 * JoeJulian needs coffee
17:25 JoeJulian @rrdns
17:25 glusterbot JoeJulian: You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
17:26 nohitall hail JoeJulian
17:26 nohitall Doumo arigatou gozaimasu!
17:26 nohitall thanks a lot
17:29 JoeJulian You're welcome.
17:30 el_isma joined #gluster
17:33 nohitall one last questio
17:33 nohitall will ut fuse mount unmount itself when server fails?
17:33 b0p joined #gluster
17:33 nohitall *the fuse
17:33 nohitall because the rrdns is only helpful if I have some script running to automount no?
17:34 JoeJulian No, and no. As the text above states, the client connects to that first server *only* to retrieve the volume definition. After that it connects to every brick in the volume.
17:35 JoeJulian So if one *server* goes down, the rest of the volume remains usable. If you've configured a volume with replication, that means that your data remains available.
17:36 nohitall but only if I use mount -t glusterfs right?
17:36 JoeJulian right
17:37 nohitall so I have to make the NAT work
17:37 JoeJulian If you mount with nfs, you'll want to do the same thing you would do for any nfs service, use a virtual ip or a load balancer.
17:37 nohitall is the issue I have due to missing port forwarding only?
17:38 JoeJulian Possibly. I've never seen anyone successfully fuse mount over nat.
17:38 emitor_uy Hi! I'm having some trouble with the gluster performance on IO.
17:39 nohitall JoeJulian: I am confused I thought I use fuse mount
17:39 JoeJulian "mount -t glusterfs" = fuse mount.
17:39 emitor_uy installed Gluster 3.7.13 in 4 servers with SAS HDD and I've configured a volume with replica 3 and arbiter 1, the volume looks like this:
17:39 emitor_uy Brick 10.11.0.130:/mnt/brick/brick1_1
17:39 emitor_uy Brick 10.11.0.177:/mnt/brick/brick1_2
17:39 emitor_uy Brick 10.11.0.95:/mnt/brick/brick1_3_arbiter
17:39 emitor_uy Brick 10.11.0.194:/mnt/brick/brick2_1
17:39 emitor_uy Brick 10.11.0.95:/mnt/brick/brick2_2
17:39 emitor_uy Brick 10.11.0.177:/mnt/brick/brick2_3_arbiter
17:39 emitor_uy We are doing some performance test through local disk, mountin through glusterfs and NFS, we got:
17:39 emitor_uy 300MB/s local disk
17:39 emitor_uy 6MB/s glusterfs
17:39 emitor_uy 24MB/s NFS
17:39 emitor_uy Why is so bad the performance?
17:39 JoeJulian @paste
17:39 glusterbot JoeJulian: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
17:39 nohitall JoeJulian: and mine is NFS mount?
17:39 JoeJulian nohitall: correct
17:40 nohitall JoeJulian: well that is what I meant in the start, so I was right to call it an NFS mount :) ok thanks
17:40 JoeJulian emitor_uy: please use a paste service like mentioned above rather than filling up a chat channel with configs. :)
17:40 nohitall emitor_uy: gluster volume info
17:40 emitor_uy sorry
17:41 JoeJulian Does your test plan match your use case?
17:43 emitor_uy here is the gluster volume info: https://thepb.in/p/mwh1oO780Orh5
17:43 glusterbot Title: TheP(aste)B.in - For all your pasting needs! (at thepb.in)
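Before blaming gluster for emitor_uy's numbers, it's worth making the probe explicit about caching; a sketch (written to a temp file here — point `of=` at the gluster mountpoint to measure the volume instead):

```shell
# Sequential-write probe. conv=fdatasync makes dd flush to stable
# storage before reporting, so the figure isn't inflated by the page
# cache. Run here against a temp file; for gluster, use a path on the
# fuse or NFS mount.
target=$(mktemp)
dd if=/dev/zero of="$target" bs=1M count=8 conv=fdatasync 2>&1 | tail -n1
```

Note also that with replica 3 arbiter 1, each acknowledged write has crossed the network to two data bricks plus the arbiter's metadata update, so comparing against a local disk is not apples-to-apples.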
17:44 Philambdo joined #gluster
17:44 nohitall JoeJulian: is it safe to upgrade from 3.7 to 3.8 on one of the 2 glusterfs-servers? or do I better shut down both and upgrade
17:45 JoeJulian You can live upgrade, yes. Upgrade servers first (one at a time, make sure files are healed in between upgrading replicas) then upgrade the clients.
17:46 nohitall JoeJulian: I hope you get paid
17:52 JoeJulian I am a ,,(volunteer)
17:52 glusterbot A person who voluntarily undertakes or expresses a willingness to undertake a service: as one who renders a service or takes part in a transaction while having no legal concern or interest or receiving valuable consideration.
17:56 nohitall thanks a lot, I'll try to give the knowledge further
17:56 nohitall *forward
17:58 JoeJulian nohitall++
17:58 glusterbot JoeJulian: nohitall's karma is now 1
18:00 nohitall I usually write tutorials on my blog for everything more complicated
18:00 nohitall not fully selfish, using them as reminder for myself :) bu at least its public
18:00 nohitall ehh *not fully selfless
18:00 JoeJulian Excellent. When you get something written up, let amye know and she'll get it syndicated.
18:01 nohitall well I wonder if its worth trying to get fuse mount working with NAT
18:01 nohitall or do just buy IPs and add bridge..
18:01 JoeJulian You can vxlan through nat and just use rfc1918 addresses.
18:02 JoeJulian I read someone's blog where they explained how to do it.
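The rough shape of the vxlan-over-NAT idea JoeJulian mentions (needs root; the VNI, interface names and addresses are invented for illustration — not from the blog he read):

```shell
# Point-to-point vxlan tunnel carrying RFC 1918 addresses over the
# NATed public path, so gluster peers and clients see each other
# directly. All names/addresses below are placeholders.
ip link add vxlan0 type vxlan id 42 dstport 4789 \
    remote 203.0.113.10 local 192.0.2.20 dev eth0
ip addr add 10.200.0.1/24 dev vxlan0
ip link set vxlan0 up

# Peers then reach each other on 10.200.0.0/24 with no NAT in the
# path, so the fuse client can open connections to every brick.
```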
18:05 nohitall sounds horribly complicated but I'll take a look
18:06 JoeJulian I promise, there's no horror involved. There's not even a little angst.
18:06 nohitall thanks for the tip
18:06 nohitall I'll need fuse mount so I gotta try
18:10 nohitall found a tutorial, albeit I don't yet fully understand how it works
18:11 nohitall I'd have to bridge the vxlan interface meh
18:15 nohitall JoeJulian: on my default 2 replica volume, if a machine dies, will volume go into read only?
18:15 b0p left #gluster
18:16 JoeJulian Not unless you enable volume quorum.
18:16 nohitall will it be completely off?
18:17 nohitall glusterfs quorum is a bit confusing since it differentiates between server and client quorum
18:17 JoeJulian No, it'll just keep working. The danger being that if you're in netsplit and you have two clients that can each write to one replica, you'll end up in split brain.
18:17 JoeJulian server quorum is being deprecated, iirc, so don't worry about it.
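The quorum knobs under discussion, as a hedged sketch for a hypothetical volume gv0 (option names per the 3.x volume-option table — verify with `gluster volume set help`):

```shell
# Client-side quorum: with replica 2, `auto` makes writes fail when a
# client can reach only one replica, trading availability for
# split-brain protection (the read-only behaviour nohitall asks about).
gluster volume set gv0 cluster.quorum-type auto

# Server-side quorum is a separate knob (the one JoeJulian says is
# being deprecated and not to worry about):
gluster volume get gv0 cluster.server-quorum-type
```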
18:17 nohitall hm true, maybe better not to have roundrobin for the mountpoint
18:18 nohitall better writes are always only on 1 node
18:18 nohitall i heard glusterfs improved a lot with selfheal though
18:18 JoeJulian No, writes go to *all* replica.
18:18 JoeJulian The client connects to *all* bricks in the server. The mountpoint is *ONLY* used to retrieve the volume definition.
18:19 nohitall im not using fuse remember?
18:19 post-factum JoeJulian: that could change with JBR, afaik
18:19 JoeJulian nohitall: Ah, ok, I thought you were still talking about wanting to configure fuse.
18:19 post-factum JoeJulian: also, are you sure about quorum?
18:19 JoeJulian post-factum: I'm sure it will.
18:20 nohitall maybe a stupid question but if you have splitbrain, wouldn't it be simplest on merge to simply use the file that was modified latest?
18:22 nohitall I figure the heal improvements use some smarter approach though :)
18:23 JoeJulian post-factum: https://botbot.me/freenode/gluster/2016-07-14/?msg=69605388&page=1
18:23 JoeJulian nohitall: It could be. Depends on what data you are willing to lose.
18:24 nohitall I am curious how gluster decides
18:24 JoeJulian @extended attributes
18:24 glusterbot JoeJulian: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex (unknown), or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
18:24 JoeJulian See #2
18:24 emitor_uy @JoeJulian: About your question, Does your test plan match your use case? The test was just a sequential write to the mountpoint
18:25 post-factum JoeJulian: that is the right quorum as I understand it, no matter how it is called
18:25 JoeJulian nohitall: Also http://gluster.readthedocs.io/en/latest/Troubleshooting/heal-info-and-split-brain-resolution/
18:25 glusterbot Title: Split Brain (Auto) - Gluster Docs (at gluster.readthedocs.io)
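Reading the AFR changelog xattrs that glusterbot's link explains looks like this (run on a server, against a brick path, never the fuse mount; the path is a placeholder):

```shell
# Dump all extended attributes of a file on the brick, in hex:
getfattr -m . -d -e hex /bricks/brick1/path/to/file

# Non-zero trusted.afr.<volname>-client-N entries are pending-operation
# counters: they record which replica "accuses" the other of missing
# writes, which is what self-heal (and split-brain detection) reads
# to decide the good copy — rather than just the latest mtime.
```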
18:25 JoeJulian post-factum: I don't grok.
18:25 nohitall thanks
18:46 kovshenin joined #gluster
18:48 Philambdo joined #gluster
18:49 msvbhat joined #gluster
19:16 hackman joined #gluster
19:31 deniszh joined #gluster
19:48 dlambrig joined #gluster
19:50 bluenemo joined #gluster
19:56 derjohn_mob joined #gluster
20:25 shyam joined #gluster
20:33 Pupeno joined #gluster
20:42 deniszh joined #gluster
21:01 olim joined #gluster
21:03 squizzi joined #gluster
21:03 dlambrig joined #gluster
21:07 bluenemo joined #gluster
21:07 d0nn1e joined #gluster
22:00 Klas joined #gluster
22:02 jwd joined #gluster
22:03 plarsen joined #gluster
22:08 hagarth joined #gluster
23:56 wadeholler joined #gluster
