
IRC log for #gluster, 2016-09-21


All times shown according to UTC.

Time Nick Message
00:20 caitnop joined #gluster
00:27 jeremyh joined #gluster
00:36 jeremyh1 joined #gluster
00:43 jeremyh1 joined #gluster
00:46 joseki joined #gluster
00:46 joseki hello, can someone explain to me cluster.shd-wait-qlength? does a larger or smaller value indicate more aggressive healing?
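
(A hedged aside, since the question never got an answer in the log: the option can be inspected and changed with the stock gluster CLI as below; "myvol" is a placeholder volume name. As I read the option description, it is the length of the self-heal daemon's wait queue per replica subvolume, so a larger value lets more heals be queued at once, i.e. more aggressive healing.)

    # show the current value (gluster volume get exists on recent 3.7+ releases)
    gluster volume get myvol cluster.shd-wait-qlength
    # allow more heals to be queued per replica subvolume
    gluster volume set myvol cluster.shd-wait-qlength 2048
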
00:52 shdeng joined #gluster
00:53 shdeng joined #gluster
00:53 jeremyh joined #gluster
00:54 rafi joined #gluster
00:55 shdeng joined #gluster
00:56 shdeng joined #gluster
01:09 kimmeh joined #gluster
01:20 masuberu joined #gluster
01:46 Lee1092 joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:51 shubhendu joined #gluster
01:52 plarsen joined #gluster
01:53 harish joined #gluster
02:03 rafi joined #gluster
02:05 blu__ joined #gluster
02:13 xi4ohui joined #gluster
02:13 xi4ohui left #gluster
02:19 kramdoss_ joined #gluster
02:20 MugginsM joined #gluster
02:32 marlinc joined #gluster
03:01 masber joined #gluster
03:04 profc joined #gluster
03:04 Gambit15 joined #gluster
03:06 profc I'm scouring the gluster docs but can't figure out whether it is possible for a non-stripe volume to contain a file larger than the brick size. 3.7.0 mentioned something about an experimental sharding feature but I can't find any docs on it.
03:08 kevc left #gluster
03:11 jeremyh joined #gluster
03:17 daMaestro joined #gluster
03:31 MugginsM joined #gluster
03:36 MugginsM joined #gluster
03:38 nbalacha joined #gluster
03:44 plarsen joined #gluster
03:52 JoeJulian Too bad profc was impatient...
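
(For the record, a hedged sketch of the sharding feature profc asked about, roughly as it existed around 3.7.x; "myvol" and the block size are placeholders, and the feature was still considered experimental at the time, mainly aimed at VM image workloads. With sharding on, a file is stored as fixed-size shards distributed across bricks, so a single file is no longer capped by the free space of one brick.)

    # enable sharding on a volume and pick a shard size
    gluster volume set myvol features.shard on
    gluster volume set myvol features.shard-block-size 64MB
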
03:53 magrawal joined #gluster
03:58 MugginsM joined #gluster
04:00 harish joined #gluster
04:06 RameshN joined #gluster
04:22 Muthu_ joined #gluster
04:23 shubhendu joined #gluster
04:25 riyas joined #gluster
04:25 gem joined #gluster
04:29 masber joined #gluster
04:33 ppai joined #gluster
04:35 xavih joined #gluster
04:36 malevolent joined #gluster
04:39 gem_ joined #gluster
04:42 skoduri joined #gluster
04:47 prasanth joined #gluster
04:54 rafi joined #gluster
04:54 ankitraj joined #gluster
04:58 itisravi joined #gluster
05:07 sanoj joined #gluster
05:07 sanoj joined #gluster
05:10 ndarshan joined #gluster
05:11 apandey joined #gluster
05:12 ppai joined #gluster
05:15 karthik_ joined #gluster
05:23 aravindavk joined #gluster
05:24 mhulsman joined #gluster
05:24 satya4ever joined #gluster
05:29 ankitraj joined #gluster
05:30 kotreshhr joined #gluster
05:36 morse joined #gluster
05:37 ramky joined #gluster
05:38 nishanth joined #gluster
05:39 nathwill joined #gluster
05:41 kukulogy joined #gluster
05:41 gem joined #gluster
05:41 gem_ joined #gluster
05:43 karnan joined #gluster
05:52 prth joined #gluster
05:54 Gnomethrower joined #gluster
06:07 ankitraj joined #gluster
06:07 ashiq joined #gluster
06:08 Muthu_ joined #gluster
06:08 Bhaskarakiran joined #gluster
06:08 kdhananjay joined #gluster
06:10 kdhananjay joined #gluster
06:10 msvbhat joined #gluster
06:14 kdhananjay left #gluster
06:15 jiffin joined #gluster
06:23 jtux joined #gluster
06:31 hgowtham joined #gluster
06:40 ppai joined #gluster
06:47 [diablo] joined #gluster
06:53 jri joined #gluster
06:56 hchiramm joined #gluster
07:11 prth joined #gluster
07:11 kimmeh joined #gluster
07:12 derjohn_mob joined #gluster
07:33 hackman joined #gluster
07:33 Slashman joined #gluster
07:34 jri joined #gluster
07:38 fsimonce joined #gluster
07:39 prasanth joined #gluster
07:49 kevein joined #gluster
07:56 k4n0 joined #gluster
08:06 mhulsman joined #gluster
08:11 kramdoss_ joined #gluster
08:18 gem joined #gluster
08:18 ankitraj joined #gluster
08:31 MikeLupe joined #gluster
08:32 prth joined #gluster
08:45 skoduri joined #gluster
08:55 derjohn_mob joined #gluster
09:08 MikeLupe joined #gluster
09:23 prth joined #gluster
09:24 social joined #gluster
09:32 jri_ joined #gluster
09:34 ankitraj joined #gluster
09:35 jri__ joined #gluster
09:40 msvbhat joined #gluster
09:42 Wizek joined #gluster
09:43 karthik_ joined #gluster
09:53 harish joined #gluster
09:59 hchiramm joined #gluster
10:01 Bardack grumf, i try to # gluster nfs-ganesha enable … and I directly get an error asking me to check the logs … but the logs are not providing any interesting info :(
10:01 Bardack any idea what I could check ?
10:07 mhulsman joined #gluster
10:11 Gnomethrower joined #gluster
10:32 lalatenduM joined #gluster
10:34 nishanth joined #gluster
10:43 devyani7 joined #gluster
10:44 devyani7 joined #gluster
10:51 masber joined #gluster
10:52 Saravanakmr joined #gluster
10:52 ankitraj joined #gluster
10:52 Philambdo joined #gluster
10:53 spalai joined #gluster
10:59 masuberu joined #gluster
11:01 Philambdo joined #gluster
11:13 harish joined #gluster
11:13 kimmeh joined #gluster
11:23 glusteruser joined #gluster
11:28 glusteruser hi, looking at https://gluster.readthedocs.io/en/latest/Quick-Start-Guide/Architecture/ I am not sure how to interpret how a replicated volume works. I would like to set up a mailserver1 (mx:10) and mailserver2 (mx:20) configuration in which I share the vmail directory. As such, it is possible both servers receive mail while being online. Would choosing a replicated volume work?
11:28 glusterbot Title: Architecture - Gluster Docs (at gluster.readthedocs.io)
11:29 glusteruser in other words: both servers are online and may receive mail. Will mail received on mailserver2 also be synced to mailserver1 this way?
11:30 Philambdo1 joined #gluster
11:31 kshlm glusteruser, It will be. But you should ensure that you only use GlusterFS through the GlusterFS mount points.
11:32 kshlm You cannot directly access the bricks (or the backend directories).
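
(A hedged sketch of what that mail setup might look like, to make kshlm's point concrete; host names, brick paths and the volume name are invented. Mail is only ever read and written through the glusterfs mount, never through the brick directories.)

    # on either server, create and start a 2-way replicated volume
    gluster volume create vmail replica 2 mail1:/data/brick1/vmail mail2:/data/brick1/vmail
    gluster volume start vmail
    # on each mail server, mount the volume and point the MTA/IMAP server at the mount
    mount -t glusterfs localhost:/vmail /var/vmail
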
11:32 arcolife joined #gluster
11:34 glusteruser @kshlm thanks. Great. I am aware of that. But that sounds great! :-) I don't know who made the graphics (I suck at it) but if there was an arrow between server 1 and 2 going both ways, that would make that perfectly clear. I am talking about https://cloud.githubusercontent.com/assets/10970993/7412379/d75272a6-ef5f-11e4-869a-c355e8505747.png
11:34 glusteruser thanks again!
11:36 kshlm Having arrows between servers wouldn't be correct, as the replication is done by the client (ie. glusterfs mount).
11:37 misc wouldn't there be issue regarding lock, or that's something gluster handle ?
11:37 glusteruser now that I am here, is it possible to have synchronous synchronisation between those two mailservers (both not being on the same LAN)?
11:38 kshlm misc, Gluster supports posix locking. So if the mailserver can co-ordinate using posix-locks, it should just work.
11:38 kshlm glusteruser, You could. But you wouldn't get good performance unless you have really low latency.
11:39 kshlm #announcement Weekly Community meeting starts in ~20 minutes on #gluster-meeting
11:40 Philambdo joined #gluster
11:41 glusteruser @kshlm for me, I thought it would be more efficient to set up synchronous sync instead of forcing asynchronous sync every 5 minutes or so. But this might just be wrong. As both servers have a lot of latency, I might just try. That's probably the wisest.
11:42 glusteruser also, I have not understood so far how asynchronous sync via gluster is really more efficient than using rsync or unison (of course, that will probably eat more cpu time, but that is not an issue in this case)
11:43 kshlm glusteruser, Writes with GlusterFS replication are synchronous and only return once it's been completed on all servers. So latency is really important, especially when you're dealing with small files.
11:43 ramky joined #gluster
11:43 shubhendu joined #gluster
11:43 kshlm Geo-replication, which is the GlusterFS async replication solution, uses rsync behind the scenes.
11:44 kshlm The intelligent bit is that geo-replication keeps track of changed files between intervals, and only rsyncs those files.
11:44 cloph_away but just for transfer, not for checking what did change. And you can also use plain tar for that
11:44 glusteruser @kshlm I read that in the manual. That is why I asked myself (only using one server and one backupserver) how this can really perform better
11:44 glusteruser I do understand now :-)
11:46 glusteruser thanks for all the expedient and fast answers! Have a great meeting.
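
(For completeness, a hedged sketch of the geo-replication commands kshlm mentioned, roughly as they looked in 3.7/3.8; "vmail" is the master volume and "backup::vmailbkp" a placeholder slave host and volume, and the prerequisites - an existing slave volume, passwordless ssh and the pem/key setup - are covered in the geo-replication docs.)

    gluster volume geo-replication vmail backup::vmailbkp create push-pem
    gluster volume geo-replication vmail backup::vmailbkp start
    gluster volume geo-replication vmail backup::vmailbkp status
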
11:47 pdrakeweb joined #gluster
12:05 kshlm #announcement Weekly community meeting is on now in #gluster-meeting
12:06 Sebbo2 joined #gluster
12:15 Jacob843 joined #gluster
12:24 pasik hello
12:24 glusterbot pasik: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:24 pasik any idea why gluster volume healing doesn't finish
12:24 pasik still 10 files to be healed/synced
12:24 pasik but it doesn't seem to finish..
12:25 pasik [2016-09-21 12:23:55.567198] E [MSGID: 113002] [posix.c:252:posix_lookup] 0-gvol1-posix: buf->ia_gfid is null for /bricks/vol1/brick1/foo [No data available]
12:25 pasik [2016-09-21 12:23:55.567250] E [MSGID: 115050] [server-rpc-fops.c:179:server_lookup_cbk] 0-gvol1-server: 244: LOOKUP /foo (00000000-0000-0000-0000-000000000001/foo) ==> (No data available) [No data available]
12:25 pasik errors like that..
12:25 pasik for the files that it should be healing
12:25 pasik (but doesn't heal)
12:27 morse joined #gluster
12:39 spalai left #gluster
12:43 Gnomethrower joined #gluster
12:46 pdrakeweb joined #gluster
12:47 ndarshan joined #gluster
12:47 mhulsman joined #gluster
12:48 johnmilton joined #gluster
12:51 shubhendu joined #gluster
12:51 thewall joined #gluster
12:53 unclemarc joined #gluster
12:53 thewall Hi everybody. New glusterfs user here. I need some advice on how to configure a 2 node cluster. Can someone help me?
12:55 johnmilton joined #gluster
12:59 skoduri_ joined #gluster
12:59 cloph_away you configure it by finding a third node to make it a lot easier to handle :-)
13:02 shyam joined #gluster
13:06 thewall mmm yeah. Anyway my doubts are about how to configure bricks on storage devices. Here are some details. Each node has 3 HDDs with 8TB each. I would like to manage my whole storage space on the two nodes to create gluster volumes (mainly to be used by virtual machines running on other nodes). I see that I should use LVM thin provisioning... but how? How many volume groups should I create on each node? One for each physical volume? A single one per node including the three HDDs? I'm a bit confused... :S
13:07 thewall ps. why should a three node cluster be easier to handle than a two node one?
13:08 Klas since otherwise split-brain risks or RO-periods are a necessity
13:08 guhcampos joined #gluster
13:08 Klas note that one of the servers can be just an arbiter, meaning, it does not need to store data
13:10 thewall Ok, thanks. Unfortunately I only have two available nodes.
13:11 Klas as for your question, you can pretty much do anything you'd like
13:11 Klas gluster just shares directories, so adapt accordingly
13:12 cloph_away thewall: no *need* at all for lvm.
13:12 Klas true, but advantageous to create a single volume out of all three disks (but that can be done in many ways)
13:13 cloph_away make sure the filesystem handles extended attributes, that you make inodes large enough to store them all in a single inode, and then the rest boils down to how you need/want to grow the volume in future
13:14 cloph_away you can create the bricks on a raid or on lvm, depending on your performance needs/your uptime requirements...
13:14 cloph_away and if you need to grow, you can either create an additional, separate volume, grow the lvm or add the new storage as additional brick.. Many ways to do it...
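
(A hedged sketch of one common way to lay out the bricks along those lines - an LVM thin pool plus XFS with large inodes; device, VG and LV names and sizes are placeholders, and whether to build one pool per disk or per node is exactly the judgement call cloph_away describes.)

    # one thin pool per node spanning the three disks, one thin LV per brick
    pvcreate /dev/sdb /dev/sdc /dev/sdd
    vgcreate vg_bricks /dev/sdb /dev/sdc /dev/sdd
    lvcreate -L 20T -T vg_bricks/thinpool
    lvcreate -V 8T -T vg_bricks/thinpool -n brick1
    # 512-byte inodes so gluster's extended attributes fit inside the inode
    mkfs.xfs -i size=512 /dev/vg_bricks/brick1
    mkdir -p /data/brick1 && mount -o noatime /dev/vg_bricks/brick1 /data/brick1
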
13:16 plarsen joined #gluster
13:17 Bardack i fail starting nfs-ganesha on my gluster server :( … first question: do i need to stop all native NFS shares before trying to start nfs-ganesha, or can they work together? I bet I must stop them
13:21 profc joined #gluster
13:21 Bardack and well, even after disabling all other nfs shares, when i want to activate nfs-ganesha, i just get a failure asking me to look in the logs … but ganesha.log is not really providing anything interesting
13:22 cloph_away maybe not interesting to you....
13:22 cloph_away just pastebin the actual output...
13:23 thewall I'm sorry. Please excuse me, but I'm a real noob here. Here is my background: I followed the Quick Start Guide (http://gluster.readthedocs.io/en/latest/Install-Guide/Quick_start/). At the end it makes me create my first replica volume by selecting two bricks, which correspond to two whole disks of 8TB each. The created Gluster volume is, as expected, 8TB in size. Ok, this is great. But how can I create a smaller volume? Let's say a 100GB one. My bricks are just huge!
13:23 glusterbot Title: Quick start to Install - Gluster Docs (at gluster.readthedocs.io)
13:23 cloph_away thewall: the size of the gluster volume corresponds to what diskspace is available on the brick directories.
13:24 cloph_away so if you want them smaller, you can create a dummy file of whatever size you like to take away that space ;-)
13:24 cloph_away or create the bricks on a smaller filesystem to begin with.
13:24 cloph_away but why would you want it smaller?
13:24 cloph_away need the space for other stuff as well?
13:27 cloph_away if you don't need the space for other stuff, you can consider setting up a raid0, then you'd only have 4TB available, but higher speed (but again, depends on your requirements...). So first answer why you want the volume to be only 100GB after talking about those 8TB disks...
13:28 thewall What i want to achieve is the following. I'd like to see my 6 HDDs as a whole space of 48TB and use it to create volumes for persistent storage for virtual machines (e. g. with Openstack Cinder).
13:28 hchiramm joined #gluster
13:29 cloph_away with replica you could only have 24 available for storage, as the other 24 are the replica/duplicates of the data...
13:29 thewall Let's say for example that I create a VM on openstack with a MongoDB database. For this database a disk space of 1TB is enough. With Cinder (and GlusterFS behind it) I would allocate a persistent volume with size 1TB for the VM hosting MongoDB
13:29 cloph_away but "whole space" contradicts with "I want only 100GB" - so still unclear to me what you actually want
13:30 shruti joined #gluster
13:30 rastar joined #gluster
13:32 shyam joined #gluster
13:33 thewall cloph_away: Thanks for your patience, and again sorry for noob questions.
13:40 mhulsman joined #gluster
13:42 msvbhat joined #gluster
13:44 kukulogy joined #gluster
13:44 mhulsman joined #gluster
13:45 profc joined #gluster
13:46 MikeLupe joined #gluster
13:51 skylar joined #gluster
13:52 kpease joined #gluster
13:55 bowhunter joined #gluster
13:57 bitchecker joined #gluster
13:59 Lee1092 joined #gluster
14:08 JoeJulian thewall: That "100GB" that you're looking for would be an image file that your vm will consume.
14:09 thewall JoeJulian: so, if I understand it right. GlusterFS would provide a large volume to cinder and cinder would split this space to provide any size block storage to VMs. Right?
14:10 thewall JoeJulian: thanks for answering
14:11 JoeJulian thewall: GlusterFS provides storage to cinder. Cinder creates images of whatever size you're defining on top of that storage, yes.
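
(A hedged illustration of JoeJulian's point; the paths and file name are invented, and in practice the Cinder glusterfs driver does this for you. The "100GB volume" is simply a sparse image file sitting on the gluster mount.)

    mount -t glusterfs server1:/gvol /mnt/gvol
    # a sparse 100G raw image only consumes brick space as the VM writes to it
    qemu-img create -f raw /mnt/gvol/volume-demo.img 100G
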
14:13 mreamy joined #gluster
14:21 rastar joined #gluster
14:25 squizzi joined #gluster
14:26 sturcotte06 joined #gluster
14:27 sturcotte06 hello
14:27 glusterbot sturcotte06: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:27 sturcotte06 We're using glusterfs extensively, and we're looking for a way to generate splitbrains
14:27 sturcotte06 For test pruposes
14:28 sturcotte06 We had a script that would prevent nodes from communicating because of a firewall rule; this worked in 3.4, but it was harder to reproduce in 3.7
14:28 sturcotte06 and it was not deterministic
14:29 sturcotte06 is there any way to create a splitbrain deterministically?
14:29 JoeJulian Yes, modify the ,,(extended attributes) on the bricks.
14:29 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
14:29 JoeJulian If you look at the source tree, there are tests that do just that.
14:30 JoeJulian afk for a bit running errands.
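
(To make JoeJulian's pointer concrete: a hedged sketch of inspecting the AFR changelog attributes on a brick, using the getfattr form glusterbot quoted above; the brick path, volume name and values are only illustrative. These trusted.afr.<volume>-client-N counters are what the tests in the source tree tweak to fake a split-brain.)

    # run against the file on the brick, not on the mount
    getfattr -m . -d -e hex /data/brick1/myvol/somefile
    # typical output includes lines like:
    #   trusted.afr.myvol-client-0=0x000000000000000000000000
    #   trusted.afr.myvol-client-1=0x000000020000000000000000
    #   trusted.gfid=0x...
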
14:31 kukulogy joined #gluster
14:31 thewall cloph_away, JoeJulian : Thank you very much for your help. Now everything is clear ;) Bye everybody
14:32 sturcotte06 which xattr do I need to modify/set?
14:33 sturcotte06 i got security.selinux and trusted.gfid
14:33 sturcotte06 and I'm pretty sure I can modify neither
14:35 pa9tv joined #gluster
14:38 armin hi. say i want to examine how a glusterfs volume was built. the volume in place is actually a stripe 2 replica 2 volume; but how would i find this fact out about it?
14:46 sturcotte06 can someone point me to some command I can use to create a synthetic split brain?
14:46 sturcotte06 it's really cryptic
14:46 sturcotte06 for someone who doesn't know the code
14:49 kramdoss_ joined #gluster
14:55 rastar joined #gluster
15:02 profc_ joined #gluster
15:03 sturcotte06 joined #gluster
15:03 sturcotte06 Could anyone help me generate a synthetic splitbrain?
15:04 sturcotte06 I'm following the documentation on readthedocs, and nothing happens
15:05 sturcotte06 I have a 3 node setup in a replicated volume
15:05 wushudoin joined #gluster
15:06 sturcotte06 the volume i'm trying to generate a split brain on is named gvappdata
15:06 sturcotte06 the bricks are located at /mnt/glusterfs/bricks/gvappdata
15:06 sturcotte06 the volume is located at /mnt/glusterfs/volumes/gvappdata
15:07 sturcotte06 on node 1:
15:07 sturcotte06 setfattr -n trusted.afr.gvappdata-client-0 -v 0x00ab03000000000001000000 /mnt/glusterfs/bricks/gvappadata/.installed
15:07 sturcotte06 setfattr -n trusted.afr.gvappdata-client-1 -v 0x000000000000000000000000 /mnt/glusterfs/bricks/gvappadata/.installed
15:07 sturcotte06 setfattr -n trusted.afr.gvappdata-client-2 -v 0x000000000000000000000000 /mnt/glusterfs/bricks/gvappadata/.installed
15:07 sturcotte06 on node 2:
15:07 sturcotte06 setfattr -n trusted.afr.gvappdata-client-0 -v 0x000000000000000000000000 /mnt/glusterfs/bricks/gvappadata/.installed
15:07 sturcotte06 setfattr -n trusted.afr.gvappdata-client-1 -v 0x0c0ef0000000000001000000 /mnt/glusterfs/bricks/gvappadata/.installed
15:07 sturcotte06 setfattr -n trusted.afr.gvappdata-client-2 -v 0x000000000000000000000000 /mnt/glusterfs/bricks/gvappadata/.installed
15:07 sturcotte06 on node 3:
15:07 sturcotte06 cat /mnt/glusterfs/volumes/gvappadata/.installed
15:07 sturcotte06 no split brain happenning
15:10 roost joined #gluster
15:10 sturcotte06 can anybody please help me generate a synthetic splitbrain?
15:10 rastar joined #gluster
15:13 sturcotte06 nevermind, I got it
15:13 sturcotte06 thanks for the help
15:15 kimmeh joined #gluster
15:22 nbalacha joined #gluster
15:44 Twistedgrim joined #gluster
15:48 prth joined #gluster
15:49 mreamy joined #gluster
15:58 hagarth joined #gluster
16:07 mlhess joined #gluster
16:09 mlhess joined #gluster
16:13 prth joined #gluster
16:16 MikeLupe joined #gluster
16:18 msvbhat joined #gluster
16:22 gem joined #gluster
16:24 Gambit15 joined #gluster
16:36 kramdoss_ joined #gluster
16:38 arcolife joined #gluster
16:38 derjohn_mob joined #gluster
16:53 primehaxor joined #gluster
16:54 arcolife_ joined #gluster
16:55 arcolife joined #gluster
16:59 nblanchet joined #gluster
17:01 jiffin joined #gluster
17:03 prth joined #gluster
17:03 sturcotte06 joined #gluster
17:04 sturcotte06 Hi, I'd like some clarification on glusterfs 3.8's auto splitbrain heal
17:04 sturcotte06 can anybody help me
17:09 arcolife joined #gluster
17:11 cloph_away sturcotte06: well, the auto is that you can tell it to prefer the larger file or the newer timestamp...
17:12 cloph_away (or also pick one of the replicas as source, without having to dig deep and set the xattrs manually..
17:13 cloph_away https://gluster.readthedocs.io/en/latest/Troubleshooting/heal-info-and-split-brain-resolution/#3-resolution-of-split-brain-using-gluster-cli - best to ask more specific questions to get better answer..
17:13 glusterbot Title: Split Brain (Auto) - Gluster Docs (at gluster.readthedocs.io)
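
(A hedged summary of the two routes in that document, for a 3.8-era volume; "myvol", the source brick and the file path are placeholders, and the linked page is the authoritative syntax.)

    # automatic: let AFR pick a winner for future split-brains by modification time
    gluster volume set myvol cluster.favorite-child-policy mtime
    # manual, per-file resolution from the CLI
    gluster volume heal myvol split-brain bigger-file /path/inside/volume
    gluster volume heal myvol split-brain latest-mtime /path/inside/volume      # in newer releases
    gluster volume heal myvol split-brain source-brick server1:/data/brick1/myvol /path/inside/volume
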
17:18 hallkeften joined #gluster
17:18 hallkeften hi
17:18 glusterbot hallkeften: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:20 hallkeften I have a problem with a two node setup where one host is disconnected and i can't get it to "sync up", and i can't do anything on the other node as it doesn't meet the quorum
17:21 hallkeften I don't have any important data, but I would like to learn how to handle this
17:23 hallkeften volume reset: failed: Quorum not met. Volume operation not allowed.
17:26 cloph not sure what you mean with "sync up"
17:26 cloph if you disable server quorum and the host that's up is the first brick, you could access the volume
17:27 skoduri joined #gluster
17:28 hallkeften two sec
17:29 skoduri_ joined #gluster
17:34 sturcotte06 Well what I want to know related to auto split brain resolution is whether it is supposed to be synchronous or asynchronous
17:34 sturcotte06 because I've created a synthetic split brain with xattributes
17:35 sturcotte06 I got: cat: .test: Input/output error
17:35 sturcotte06 with cluster.favorite-child-policy: mtime
17:35 sturcotte06 and the split brain does not get resolved
17:35 sturcotte06 are they healed through the self heal daemon?
17:36 sturcotte06 To me, auto-resolution means I won't ever get those dreaded Input/Output error
17:37 sturcotte06 ever again
17:37 sturcotte06 but it does not seem to be the case right now
17:39 hallkeften Ok im a total noob to gluster, this is my setup:
17:40 hallkeften I have two servers B01SAN03 and B01SAN04 connected to an ovirt cluster
17:40 cloph so you did set  cluster.favorite-child-policy ?
17:40 hallkeften gluster> peer status
17:41 hallkeften Number of Peers: 1
17:41 hallkeften Hostname: B01SAN04
17:41 hallkeften Uuid: b6e264e2-8b53-413d-9371-517a52ed87c6
17:41 hallkeften State: Peer in Cluster (Disconnected)
17:41 hallkeften gluster> peer status
17:41 hallkeften Number of Peers: 1
17:41 hallkeften Hostname: B01SAN04
17:41 hallkeften Uuid: b6e264e2-8b53-413d-9371-517a52ed87c6
17:41 hallkeften State: Peer in Cluster (Connected)
17:41 hallkeften gluster> peer status
17:41 hallkeften Number of Peers: 1
17:41 hallkeften Hostname: b01san03
17:41 hallkeften Uuid: 14da068d-ef48-418e-a200-9620139e2fe2
17:41 hallkeften State: Peer in Cluster (Connected)
17:41 hallkeften gluster>
17:42 hallkeften i dont get it, now they both say connected
17:43 hallkeften Ok both are connected but with only one peer, so that is splitbrain right?
17:43 cloph peer status is independent of brick processes - so for volume state the volume status would be of more interest, but use a ,,pastebin please
17:43 cloph !paste
17:43 * cloph cannot remember the bot's trigger for that...
17:44 cloph and depends on how/whether you did configure server quorum and client quorum.
17:44 hallkeften gluster> volume state gluster01
17:44 hallkeften volume statedump: failed: Locking failed on B01SAN04. Please check log file for details.
17:45 kkeithley @past
17:45 glusterbot kkeithley: I do not know about 'past', but I do know about these similar topics: 'paste', 'pasteinfo', 'pastepeer', 'pastestatus', 'pastevolumestatus'
17:45 kkeithley @paste
17:45 glusterbot kkeithley: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
17:45 hallkeften gluster> volume state gluster01
17:45 hallkeften volume statedump: failed: Quorum not met. Volume operation not allowed.
17:46 kkeithley (,,paste)
17:46 cloph ok, so you got server quorum enabled, but you don't want it.
17:46 hallkeften I have two servers, I would like to have them as failover
17:46 hallkeften :)
17:47 hallkeften I'm so sorry for my bad spelling and grammar
17:48 cloph gluster isn't really "failover" - writes go to all replicas synchronously
17:49 skylar joined #gluster
17:49 renout_away joined #gluster
17:50 renout_a` joined #gluster
17:51 hallkeften Yeah i get that but i would like to have my ovirt system -> B01SAN03  and if that server is offline, point to B01SAN04
17:51 cloph https://github.com/lpabon/glusterfs/blob/master/doc/features/server-quorum.md for server quorum details - you need to set it to "none" - but of course this is risky
17:51 glusterbot Title: glusterfs/server-quorum.md at master · lpabon/glusterfs · GitHub (at github.com)
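
(For reference, a hedged sketch of the quorum knobs being discussed; "myvol" is a placeholder, and as cloph says, turning quorum off on a two-node replica trades split-brain safety for availability.)

    # server-side quorum: glusterd stops bricks / refuses volume ops when it is lost
    gluster volume set myvol cluster.server-quorum-type none
    # client-side quorum: writes fail when too few replicas are reachable
    gluster volume set myvol cluster.quorum-type none
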
17:51 Dave____ joined #gluster
17:52 hallkeften Can i have 3 server where one is just a "witnes"
17:53 cloph yes.
17:53 kkeithley that's called arbiter
17:53 kkeithley in gluster
17:53 hallkeften now when i think i have a splitbrain how can i force one of the servers to be the master
17:54 hallkeften but in the state where i'm at i really can't add another node yet as the cluster is broken :)
17:54 kkeithley [13:13:28] <cloph_away> https://gluster.readthedocs.io/en/latest/Troubleshooting/heal-info-and-split-brain-resolution/#3-resolution-of-split-brain-using-gluster-cli - best to ask more specific questions to get better answer..
17:54 glusterbot Title: Split Brain (Auto) - Gluster Docs (at gluster.readthedocs.io)
17:54 cloph nah, arbiter is part of volume, but you  can also have a peer in the trusted pool that isn't part of any volume
17:54 nathwill joined #gluster
17:55 hallkeften thats what i want, i would just like to have a node prevent splitbrain
17:55 kkeithley yeah, that's called arbiter
17:55 hallkeften ok
17:56 cloph an additional server not participating in the pool will not prevent splitbrain (that only helps with ensuring server quorum) - arbiter does help, but then needs to be part of the volume and store metadata (no file contents, so the disk-space requirement is minimal)
17:58 msvbhat joined #gluster
17:58 hallkeften volume heal gluster01 split-brain bigger-file
17:58 hallkeften (I don't know what to type as FILE as this is vm storage; the mount point is /gluster01)
18:00 hallkeften gluster> volume heal gluster01 split-brain bigger-file /gluster01
18:00 hallkeften gluster01: Not able to fetch volfile from glusterd
18:00 hallkeften Volume heal failed.
18:03 hallkeften should the file be the directory from the filesystem or from the volume? would /gluster01 in this case be /gluster01/gluster01
18:08 hallkeften Not able to fetch volfile from glusterd
18:08 hallkeften volume heal gluster01 split-brain source-brick B01SAN04:/gluster01
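
(As I read the split-brain document linked earlier, FILE is the path as seen from the root of the volume - i.e. what a client mount would show - not a brick or local filesystem path, and the source brick is given as <host>:<brick-directory>. A hedged example reusing the names from this conversation, with /images/vm1.img as an invented file inside the volume; a gfid string can be used instead of a path.)

    gluster volume heal gluster01 split-brain source-brick B01SAN04:/gluster01 /images/vm1.img
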
18:09 roost joined #gluster
18:12 hallkeften so im fucked then :)
18:15 hallkeften when i run  gluster peer status
18:15 hallkeften Number of Peers: 1
18:15 hallkeften Hostname: b01san03
18:15 hallkeften Uuid: 14da068d-ef48-418e-a200-9620139e2fe2
18:15 hallkeften State: Peer in Cluster (Connected)
18:16 jri joined #gluster
18:16 hallkeften server B01SAN03 tells me B01SAN04 is the only peer and when i run peer status on B01SAN04 it tells me that B01SAN03 is the only peer
18:17 hallkeften odd?
18:17 jri joined #gluster
18:30 sturcotte06 Hello, can someone explain to me how the favorite child policy is applied?
18:30 sturcotte06 is it sync/async?
18:30 sturcotte06 is it applied by the self heal daemon?
18:31 sturcotte06 is there any other settings that can interfere with this mechanism?
18:31 hackman joined #gluster
18:33 sturcotte06 It would be greatly appreciated if someone could answer
18:38 sturcotte06 crickets*
18:38 cloph hallkeften: bigger-file for VM storage won't really work - as they are static size
18:39 cloph even when sparse/not allocated to begin with
18:39 hallkeften hmm i dont care about the data i only want the cluster to get online
18:40 hallkeften the thing is that when i try to start the volume now i get this volume start: gluster01: failed: Locking failed on B01SAN04. Please check log file for details.
18:40 hallkeften when I do a peer status or pool list i don't get the result i expect: i have two servers so it should be two peers, right?
18:41 derjohn_mob joined #gluster
18:42 cloph not sure why you don't just do as gluster tells you...
18:43 cloph and no, it counts how many other machines it knows about.
18:43 hallkeften ok
18:43 hallkeften I try to read the logs but i dont get it
18:45 hallkeften 0-management: Lock not released for gluster01
18:47 hallkeften is this what i need to do?
18:47 hallkeften volume clear-locks
18:49 Dave joined #gluster
18:50 profc_ is there a strategy i can use that improves write performance? For example, client only writing to one replica instead of all three, and after bulk upload is finished, request a rebalance of some sort?
18:51 hallkeften Volume clear-locks unsuccessful
18:51 hallkeften Couldn't get port numbers of local bricks
18:52 hallkeften i will try to delete the volume and create a new one
18:54 rwheeler joined #gluster
18:57 hallkeften gluster volume create engine replica 2 arbiter 1 - that will make the last node just write meta-data?
18:58 sturcotte06 can somebody help me with split-brain resolution? there are a lot of fishy things happening on my 3.8 replicated cluster
19:01 renout_away joined #gluster
19:01 cloph yes, third one is arbiter
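
(A hedged sketch of creating such a volume; the docs of that era spell it "replica 3 arbiter 1", with the last brick listed becoming the arbiter, and the host/brick names below are invented. The arbiter brick stores only file names and metadata, not file contents.)

    gluster volume create engine replica 3 arbiter 1 \
        B01SAN03:/bricks/engine B01SAN04:/bricks/engine witness:/bricks/engine
    gluster volume start engine
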
19:03 msvbhat joined #gluster
19:06 sturcotte06 I need someone to explain this to me:
19:07 gem joined #gluster
19:07 sturcotte06 gluster volume heal gvdata00 info split-brain
19:07 sturcotte06 @paste
19:07 glusterbot sturcotte06: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
19:15 cloph (you can of course use copy'n'paste to your favorite pastebin website; avoid ad-infested ones like pastebin.com though)
19:15 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
19:17 kimmeh joined #gluster
19:31 kpease joined #gluster
19:35 gem joined #gluster
19:36 kpease joined #gluster
19:37 johnmilton joined #gluster
20:05 mreamy joined #gluster
20:10 shyam joined #gluster
20:10 mreamy joined #gluster
20:11 derjohn_mob joined #gluster
20:38 nathwill joined #gluster
21:02 hackman joined #gluster
21:04 justinclift joined #gluster
21:05 justinclift Ugh.  Heads up for anyone using irssi: New remote exploits for it: https://irssi.org/security/irssi_sa_2016.txt
21:06 justinclift :(
21:07 justinclift left #gluster
21:18 roost joined #gluster
21:27 ashiq joined #gluster
21:37 hackman joined #gluster
21:39 hackman joined #gluster
21:40 Groink joined #gluster
21:43 hackman joined #gluster
21:48 hackman joined #gluster
21:55 roost joined #gluster
22:19 hackman joined #gluster
22:54 kukulogy joined #gluster
23:12 arcolife joined #gluster
23:18 kimmeh joined #gluster
23:40 jeremyh joined #gluster
23:40 hagarth joined #gluster
23:43 harish joined #gluster
