IRC log for #gluster, 2016-08-30

All times shown according to UTC.

Time Nick Message
00:31 abyss^ joined #gluster
00:34 Alghost joined #gluster
00:36 Alghost joined #gluster
00:48 d0nn1e joined #gluster
00:55 kramdoss_ joined #gluster
00:58 sandersr joined #gluster
01:00 shdeng joined #gluster
01:03 Alghost joined #gluster
01:04 stopbyte joined #gluster
01:19 sandersr joined #gluster
01:36 Lee1092 joined #gluster
01:40 kramdoss_ joined #gluster
01:50 wadeholler joined #gluster
01:57 harish joined #gluster
02:22 harish joined #gluster
02:33 aravindavk joined #gluster
02:35 jkroon joined #gluster
02:40 rafi joined #gluster
02:58 gluytium joined #gluster
03:19 scc joined #gluster
03:23 hchiramm joined #gluster
03:30 harish joined #gluster
03:37 magrawal joined #gluster
03:38 atinm joined #gluster
03:39 sanoj joined #gluster
03:43 itisravi joined #gluster
03:48 rafi joined #gluster
03:55 riyas joined #gluster
04:21 masber joined #gluster
04:27 shubhendu joined #gluster
04:29 karthik_ joined #gluster
04:32 satya4ever joined #gluster
04:33 ZachLanich joined #gluster
04:41 sanoj joined #gluster
04:42 ppai joined #gluster
04:44 kdhananjay joined #gluster
04:49 nbalacha joined #gluster
04:59 aspandey joined #gluster
05:08 ankitraj joined #gluster
05:11 aravindavk joined #gluster
05:12 skoduri joined #gluster
05:18 rafi joined #gluster
05:19 gem joined #gluster
05:19 jith_ joined #gluster
05:19 jiffin joined #gluster
05:20 prasanth joined #gluster
05:23 jith_ hi all, i want to install a stable glusterfs.. which version can i install in 3.7? i mean i saw 3.7.1 to 3.7.12
05:26 Klas up to 3.7.14, there is also 3.8.1 or something along those lines
05:27 Klas I've yet to see anything about a "stable" version, it's more of a project where a bit too much is happening, feature-wise, for this to make sense
05:28 Klas I might be wrong though, btw
05:28 jith_ Klas: ok thanks
05:28 Klas I'm still pretty fresh ;)
05:28 RameshN joined #gluster
05:28 atinm joined #gluster
05:29 jith_ its ok
05:30 ndarshan joined #gluster
05:33 itisravi jith_:  Also, whatever major version of the release you decide to use, go for the latest minor version in that release.
05:38 jith_ itisravi: yes thanks
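A minimal sketch of getting the latest 3.7.x minor release at the time, assuming the community repos rather than the distro defaults (the PPA and SIG package names below are from memory of that period and may differ):

    # Ubuntu/Debian, assuming the community PPA for the 3.7 series
    sudo add-apt-repository ppa:gluster/glusterfs-3.7
    sudo apt-get update && sudo apt-get install glusterfs-server
    # CentOS/RHEL, assuming the Storage SIG release package
    sudo yum install centos-release-gluster37
    sudo yum install glusterfs-server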
05:39 skoduri joined #gluster
05:40 armyriad joined #gluster
05:41 hchiramm joined #gluster
05:42 Klas yeah, I tried to start a ticket with ubuntu regarding glusterfs since they are just choosing the latest version existing when they lock versions and stick with it, and, well, that's really not a good idea
05:45 ramky joined #gluster
05:46 jith_ I read like all mount points must be unique throughout the entire storage pool for eg /data/.. then how i could use two device(sdb and sdc) attached to the same machine??
05:47 Klas do you mean for the same volume or for different ones?
05:48 ZachLanich joined #gluster
05:48 kotreshhr joined #gluster
05:48 Klas if same, you need to use LVM or other such tools, if it's for different volumes, you can do whatever you like
05:48 Klas s/if same/if different/
05:48 glusterbot What Klas meant to say was: if different, you need to use LVM or other such tools, if it's for different volumes, you can do whatever you like
05:52 jith_ i am having two devices sdb and sdc in one machine and sdb in other two machine.. i want to create a volume like server 1's sdb and sdc is data and other two servers, server2(sdb) server3(sdc) as first servers replica
05:52 jith_ server3(sdb)*
05:53 Klas wait, you want to replicate the same dataset to the same server twice?
05:53 Klas why would you want to do that?
05:54 jith_ no no i want to replicate the first server data to other two servers.. first server is having two disk and other two is having only one
05:54 mhulsman joined #gluster
05:56 jith_ i understand like i need to create LVM volumes with different volume groups right for the first server?
05:57 Klas so, you want a replica 3, one of the servers with two disks then?
05:57 Klas that is no issue
05:58 Bhaskarakiran joined #gluster
05:58 Klas and, yeah, you need LVM or other such tools to use both disks
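If the goal were a single brick spanning both of server1's disks, a minimal LVM sketch of what Klas suggests (volume group and logical volume names are illustrative; format and mount the result like any other brick afterwards):

    pvcreate /dev/sdb /dev/sdc
    vgcreate vg_gluster /dev/sdb /dev/sdc
    lvcreate -l 100%FREE -n brick1 vg_gluster
    # then mkfs and mount /dev/vg_gluster/brick1 as a normal brick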
05:58 jith_ replica 2
05:58 Klas then you need 4 servers
05:59 Klas glusterfs requires a multiple of replica number of servers
05:59 Klas 2 means 2, 4, 6, 8 and so forth
05:59 Klas 3 means 3, 6, 9 and so forth
05:59 jith_ server 1(sdb and sdc) will be the content.. and sdb of server 1 will be replicated to sdb of server2 and sdc of server1 will be replicated to sdb of server 3
06:00 jith_ i have three servers but total 4 disk
06:00 Klas ah, you want two volumes replica 2
06:00 Klas then you don't need lvm
06:01 jith_ one glusterfs volume with two replica config..
06:01 karnan joined #gluster
06:01 jith_ volume type is distribute-replica
06:02 Manikandan joined #gluster
06:03 jith_ my create volume command looks like "gluster volume create gvol replica 2 transport tcp \
06:03 jith_ test1:/data/gluster/gvol0/brick1 \
06:03 jith_ test2:/data/gluster/gvol0/brick1 \
06:03 jith_ test1:/data/gluster/gvol1/brick1 \
06:03 jith_ test3:/data/gluster/gvol0/brick1"
06:04 jith_ is this right?
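For reference, with "replica 2" gluster groups bricks in the order they are listed: bricks 1 and 2 form the first replica pair, bricks 3 and 4 the second, and files are then distributed across the two pairs. So jith_'s command (hostnames and paths as given above) does appear to build the layout he described, with server1's two bricks mirrored to server2 and server3 respectively:

    gluster volume create gvol replica 2 transport tcp \
        test1:/data/gluster/gvol0/brick1 test2:/data/gluster/gvol0/brick1 \
        test1:/data/gluster/gvol1/brick1 test3:/data/gluster/gvol0/brick1
    gluster volume start gvol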
06:04 nbalacha joined #gluster
06:05 jith_ mount point is /data/gluster/gvol0/
06:05 Klas I've never used "transport tcp", so I can't help with that part
06:05 Klas but, you are creating one volume there
06:05 jith_ yes
06:05 Klas what you seem to want is two volumes, one for share1, one for share2
06:06 Klas where sdb1 on server1 and sdb1 on server2 match
06:06 Klas and sdc1 on server1 and sdb1 on server3 match
06:06 Klas is that correct?
06:06 jith_ no i want only one volume
06:06 Klas it might be that I'm missing something
06:06 Klas ok, then I have no idea what you want exactly, sorry, I'm probably missing something
06:07 jith_ is that syntax right?
06:07 jith_ i mean share1 can have any number of bricks right?
06:08 Klas I have no idea what you are trying to do and thus if it's possible
06:08 Klas two bricks on same server in same volume seems VERY strange to me
06:09 om hi all.  I had to rebuild the glusterfs master brick of replica 4.  Once I added the new brick, I started the healing process.  Unfortunately, though, all the glusterfs and nfs (provided by glusterfs) show the data from the master that does not have the data on it yet because it is healing.  Is there any way to work around this and have the gluster client mounts show the data that is actually on the other 3 replicas instead??  This data behavior was an unexpected
06:09 om situation and is causing downtime...  I disabled nfs on the volume and am temporarily working around this by using nfs kernel server to export the data from the a brick that has the complete data...  Any ideas why this is happening?  Any way to fix this on gluster client mounts without using nfs kernel server to work around it?  It was an unexpected side effect of rebuilding a brick...
06:09 om whoa.  Sorry for the super length...
06:09 jith_ Klas, i am having its replica na
06:09 Klas om: I actually have the same issue, what version are you running?
06:10 om 3.7.14
06:10 om I believe...
06:10 Klas same for me
06:10 Klas hrm
06:10 ashiq joined #gluster
06:10 om gosh... what's the likelihood of this happening to 2 people at the same time in irc...
06:11 om what did you do to work around it Klas
06:11 Klas om: lab systems, not working around it atm
06:11 om so your waiting for the healing process to complete then...
06:12 om I can't it's production
06:12 R0ok_ joined #gluster
06:12 Klas I fully understand that
06:12 Klas I got this link yesterday from post-factum https://joejulian.name/blog/replacing-a-glusterfs-server-best-practice/
06:12 glusterbot Title: Replacing a GlusterFS Server: Best Practice (at joejulian.name)
06:13 armyriad joined #gluster
06:13 om so strange... you would think that the glusterfs servers would know that the healing brick should be taken out of the pool while it heals and not only that but why does it show the data on the healing brick instead of the bricks that are healthy??
06:13 Klas my plan is to test different ways of replacing a lost brick and see if any of the other ways works better
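One such way, sketched with placeholder names (VOLNAME and the brick paths are illustrative; see the linked post for the full procedure): swap the dead brick for an empty one and let self-heal repopulate it.

    gluster volume replace-brick VOLNAME old-server:/data/brick new-server:/data/brick commit force
    gluster volume heal VOLNAME full
    gluster volume heal VOLNAME info    # watch healing progress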
06:13 Klas om: yes, according to post-factum, this is intended
06:13 om fml
06:13 om how does this make sense??
06:13 Klas sorry, you misunderstood what I meant
06:14 Klas intended behaviour is to be up all the time
06:14 Klas while heal is being performe
06:14 Klas d
06:14 om right... but it's not happening...
06:14 Klas in my case, we tried to replace the brick by messing about with ID of the peer, we thought that this created the issue
06:15 om Klas, do you work with gluster project?  You mentioned lab...
06:15 Klas and we want to try different ways of restoring it and see what happens
06:15 Klas ah, no, not at all, but our systems are not in production yet =)
06:15 Klas so we call that instance lab
06:15 Klas lab->test->prod
06:16 jith_ Klas: thanks, so u mean only one brick should be used from one server for the same volume??
06:16 hgowtham joined #gluster
06:16 om so weird though... because now some of the glusterfs mounts are showing correctly
06:16 Klas jith_: to me, this seems reasonable, yes, can't see why you would want anything else
06:17 om it's like, if the client mounts from the wrong brick node (which is what was happening no matter what brick hostname I gave it) it will show the healing brick data, which is an incomplete fs
06:18 Klas yup, same here, it only shows the data from the non-complete node, meaning, we believe it wants verification that data exists on all nodes or some such
06:19 [o__o] joined #gluster
06:19 om xml
06:19 om xml
06:19 om fml
06:20 om sorry, for the auto-correct
06:21 om wonder if it makes more sense to replace a brick while the volume is OFF and just rsync to the brick directory...  will probably be quicker
06:21 om what a mess
06:21 om I wish someone from gluster would chime in on this
06:22 Klas yeah, well, community support is not 24/7
06:22 jith_ Klas, so i cant use the two disks in the first server?? sorry i m newbie
06:22 Klas jith_: I don't even understand what you want to do, so I can't answer that
06:22 om jith_:  if you have 2 disks on one server, you can only use each disk for separate volumes. afaik
06:23 Klas jith_: what om said, except you can make a virtual disk out of several disks
06:23 om with gluster?
06:24 om you mean with lvm or what?
06:24 jith_ Klas, i want to make use of the two disk.. that is the purpose
06:24 Klas om: yup
06:24 om jith_: you have two physical disks?
06:24 nbalacha joined #gluster
06:24 Klas in one server, then two servers with one disk
06:24 Klas and wants a replica 2
06:24 Muthu_ joined #gluster
06:25 Klas but the rest is very unclear
06:25 om you want to use each for a brick of the same volume?  I don't think that's possible
06:25 om what's the point of that anyway?
06:25 Klas I fail to see the point as well
06:25 om if this server is a bare metal dedi, just use a hypervisor and create vm's on it
06:26 jith_ om, yes
06:26 jith_ om, ok
06:26 om jith_: you should clarify your goals.  It seems you are trying to do something with glusterfs that it is not intended to do
06:26 Klas agreed
06:27 jith_ ok :)
06:27 jith_ all the disk should be of same size?
06:27 om are you trying to have a clone of the data automated between 2 physical disks on the same server?
06:28 om there are other utilities for that... don't use gluster for that
06:28 om for gluster, yes, all bricks on a volume should be the same size
06:28 om ideally
06:28 Klas barring arbiter of course
06:29 jith_ actually i want to give a glusterfs volume as a backend for openstack-cinder
06:29 om I don't think gluster cares.  But if you run out of space on one volume because it's smaller than the other, then you have a problem
06:29 om whoa
06:29 om I would NOT do that
06:30 om openstack cinder works best on iscsi afaik
06:31 om mucking with glusterfs for openstack cinder... Not a good idea
06:32 harish joined #gluster
06:35 ic0n_ joined #gluster
06:39 jtux joined #gluster
06:40 prasanth joined #gluster
06:43 kshlm joined #gluster
06:44 devyani7 joined #gluster
06:45 prasanth joined #gluster
06:45 jtux joined #gluster
06:51 prasanth joined #gluster
06:55 kovshenin joined #gluster
07:05 derjohn_mob joined #gluster
07:06 msvbhat joined #gluster
07:15 jri joined #gluster
07:18 [diablo] joined #gluster
07:25 fsimonce joined #gluster
07:28 ivan_rossi joined #gluster
07:40 owlbot joined #gluster
07:41 Pupeno joined #gluster
07:42 devyani7 joined #gluster
07:45 kovshenin joined #gluster
07:47 derjohn_mob joined #gluster
07:53 gem joined #gluster
07:53 shdeng joined #gluster
07:53 shdeng joined #gluster
08:00 deniszh joined #gluster
08:02 jkroon joined #gluster
08:07 ahino joined #gluster
08:12 atalur joined #gluster
08:12 Pupeno joined #gluster
08:13 archit_ joined #gluster
08:14 [diablo] Good morning #gluster
08:15 Smoka joined #gluster
08:15 [diablo] guys, when using native client, is there a way to speed the handover when a node drops out of the pool. For example if the native client connected initially to 192.168.1.1 (with a pool containing 2 x nodes, 192.168.1.1 and 192.168.1.2), the time it takes to establish a connection to 192.168.1.2 seems around 10-15 ~ seconds from my tests at home
08:20 Klas that sound very long
08:20 [diablo] hey Klas
08:20 [diablo] yeah actually we've got RHGS ...
08:20 Klas for me, it's less than a second
08:21 [diablo] but the test I did was on 2 x Ubuntu VM's on VMware Fusion
08:21 Klas I haven't tried with large volumes though
08:21 pur joined #gluster
08:22 Klas my tests are done on a normal vsphere installation between two different datacenters in the same area (about 500 meters apart)
08:23 Klas but, as I said, 10-15 seconds sounds really high, I will want to test this myself with my volume with more data (1.7 TB) when it's up and running
08:24 Klas my tests where done with the following test:
08:24 Klas while true; do date >> date; sleep 0.5; done
08:24 Klas btw
08:24 Klas or something along those lines
08:24 [diablo] OK cool
08:24 Klas when losing one of the servers, I never lost a full second
08:24 Klas barring when doing stuff with SSL, that never worked very well and we decided to wait atm
08:25 Klas (and not use SSL that is)
08:25 [diablo] Klas, you know a way of taking just one node from a volume (So leaving all the other volumes with their normal replication)
08:25 [diablo] or must I knock out a node in it's entirety
08:26 Klas I've been purposefully unkind
08:26 Klas killing all procesess, rebooting, pulling power
08:26 [diablo] :)
08:27 Klas that is how it will happen in reality, so why should I test in any other way =P?
08:27 [diablo] LOLLL
08:27 [diablo] valid point
08:27 Klas I suppose mucking about with IP-tables and such could be valid as well
08:27 [diablo] yup
08:27 [diablo] that's always a good test.. iptables blocking
08:28 Klas or shut off the interface, if it's dedicated
08:28 Klas or if you are running from console
08:28 Klas also valid
08:29 Klas I haven't tried the network approaches so far, I leave that to the other guy on the project since he's the network guy amongst us ;)
08:29 hchiramm joined #gluster
08:29 [diablo] must really try the failover speed
08:29 [diablo] gotta fixup some box, then I'll have a hack at it
08:29 [diablo] cheers Klas
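If the delay [diablo] sees is the client waiting out an unresponsive server, the usual knob is the volume's network.ping-timeout option (42 seconds by default); this is only a guess at the cause, and very low values risk spurious disconnects:

    gluster volume set VOLNAME network.ping-timeout 10    # seconds; VOLNAME is a placeholder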
08:34 hackman joined #gluster
08:40 markd_ joined #gluster
08:43 jtux joined #gluster
08:48 Pupeno joined #gluster
08:48 victori joined #gluster
08:50 deniszh1 joined #gluster
08:56 rastar joined #gluster
09:01 atalur joined #gluster
09:01 Saravanakmr joined #gluster
09:05 Manikandan_ joined #gluster
09:08 karthik_ joined #gluster
09:12 gluytium joined #gluster
09:12 muneerse joined #gluster
09:17 Gnomethrower joined #gluster
09:17 jith_ in case of distribute-replica type with 2 replica i need 4 servers right??
09:17 fedele joined #gluster
09:18 fedele Hello
09:18 glusterbot fedele: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:19 fedele I would like to enable quota on a link that points to a directory in the same glusterfs, is this possible?
09:25 jiffin jith_: u need atleast 4 servers
09:29 jiffin fedele: should be fine I guess, u can check with Manikandan
09:29 jiffin but he just left from irc :(
09:29 gem joined #gluster
09:30 fedele ok, I will try.
09:30 fedele thnk you
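For reference, enabling quota and limiting a directory looks roughly like this (volume name and path are illustrative); whether a limit behaves as expected when the directory is reached through a symlink is the part to confirm with Manikandan:

    gluster volume quota VOLNAME enable
    gluster volume quota VOLNAME limit-usage /some/dir 100GB
    gluster volume quota VOLNAME list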
09:31 Manikandan joined #gluster
09:36 Klas jith_: 4, 6, 8 etc, any 2*n works
09:42 derjohn_mob joined #gluster
09:51 Manikandan joined #gluster
09:54 rastar joined #gluster
10:02 jith_ Klas. thanks
10:07 jith_ Klas, it is better to have lvm rather than direct disk right? so that storage is expanded later
10:12 atalur joined #gluster
10:13 yalu joined #gluster
10:22 cloph jith_: you don't need 4 *servers* but (at least) 4 bricks
10:22 cloph but of course makes more sense if you can actually split across multiple servers
10:23 cloph if you have replica pairs on the very same server, gluster will complain/you'd have to force it to create the volume.
10:25 karthik_ joined #gluster
10:25 jith_ cloph: i can have two servers with four bricks?? first server will be the data and the  other will be its replica??
10:27 cloph jith_: with only two servers you cannot maintain server-quorum. if you don't have at least three, then you cannot tell whether a host is down or just not reachable.
10:27 jith_ cloph: thanks.. with replica 2?
10:28 jith_ cloph: is it mandatory to use lvm based bricks?? or can i directly attach one disk(sdb) to server and continue with mkfs.xfs /dev/sdb1??
10:28 cloph but yes, you can have the four bricks on two servers, just need to use server1:/path/brick1 server2:/path/brick1 server1:/path/brick2 server2:/path/brick2
10:29 cloph depending on the usecase and the underlying disk structure, I'd go so far and say: avoid lvm if not neccessary.
10:29 cloph for large raid (20 disks, 2 spares) it had very negative impact on performance compared to just running on the raid10 directly.
10:30 aravindavk joined #gluster
10:30 partner i've done pretty much all the combinations and i still don't know what is the best approach :)
10:30 cloph and with only replica 2, you'll also have client side quorum problems, similar when only having two servers, but in this case the first brick would be able to continue..
10:31 cloph jith_: make sure to use large inode-size on the mkfs.xfs call (to be able to store the extended attributes in the same inode)
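A sketch of the brick formatting cloph describes (device and mount point are illustrative; 512-byte inodes are what the gluster docs commonly recommend so the extended attributes fit inside the inode):

    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /data/brick1
    mount /dev/sdb1 /data/brick1
    echo '/dev/sdb1 /data/brick1 xfs defaults 0 2' >> /etc/fstab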
10:31 partner this is one of the volumes: Number of Bricks: 64 x 3 = 192
10:31 ira joined #gluster
10:32 jith_ cloph: thanks a lot
10:33 cloph and it is also not like "server 1 is data, server 2 is replica" - they both will be active/can act as source for modifications.
10:33 partner yeah that's a good point, writes go to both simultaneously
10:33 cloph if you want one master, and one slave, you might want geo-replication instead (but then it wouldn't fit your distributed-replicated question :-)
10:34 jith_ cloph: thanks, if lvm is not there, is it possible to expand the storage size?? i can only add more bricks right?..
10:35 cloph or expand a underlying raid and resize the filesystem... but yes, otherwise you'd have to add more bricks.
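Growing a brick in place when it sits on LVM, sketched with illustrative names (xfs grows online; repeat on every brick of the replica set so they stay the same size):

    lvextend -L +500G /dev/vg_gluster/brick1
    xfs_growfs /data/brick1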
10:36 jith_ yes i am planning to have synchronous replication in local and asynchronous geo replication.. total replica will be three... as of now i will have local replication setup for HA and is it possible to add georeplication later?
10:37 cloph you won't have HA with only two bricks or replica 2
10:38 cloph s/two bricks/two servers/
10:38 glusterbot What cloph meant to say was: you won't have HA with only two servers or replica 2
10:38 jith_ why?
10:39 cloph because when one of your servers goes down, your volume will be read-only.
10:40 jith_ i am having another replicated server??
10:40 jith_ is it mandatory to have three replication for HA??
10:40 msvbhat joined #gluster
10:41 Klas not for read, but for write, yes
10:41 cloph you cannot use geo-replication server for the volume, the geo-replicated volume is a separate volume that is only filled asynchronously.
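Adding that asynchronous leg later is done against a separate slave volume that already exists; roughly (master volume, slave host and slave volume names are illustrative):

    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status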
10:41 Klas you need a majority deciding if the share is ok or not
10:41 cloph you could use the geo-replication *server* as another peer in the trusted pool, so you can maintain server quorum
10:42 Klas wouldn't it need to be an arbiter then?
10:42 cloph but for the volumes themselves, you need at least replica 3 or 2 with arbiter to maintain client quorum.
10:42 Klas ah, now I understand, nm
10:43 itisravi It is not *mandatory* to have replica 3 or arbiter for HA. Replica-2 works fine too. It is just that network disconnects may lead to split-brain of files.
10:43 cloph on replica 2, if the first brick goes down, your volume will be r/o.
10:43 jith_ thanks
10:43 cloph Nothing I'd call HA.
10:44 itisravi cloph: only if you enable client-quorum.
10:44 cloph ok, if you like cleaning up splitbrain, you can ignore quorum, true..
10:44 itisravi right
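The replica 2 + arbiter layout mentioned above, sketched with illustrative hostnames and paths (the arbiter brick stores only file metadata, so it can be much smaller than the data bricks):

    gluster volume create gvol replica 3 arbiter 1 \
        server1:/data/brick1 server2:/data/brick1 server3:/data/arbiter1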
10:45 jith_ cloph: thanks, if i am using lvm, i should use lvm for all servers or we can use a combination of lvm bricks and local disk bricks as well???
10:45 jith_ itisravi, thanks
10:46 B21956 joined #gluster
10:46 cloph jith_: you can mix as you will, you could also mix filesystems on the different bricks (but just because you can, doesn't mean you should :-P)
10:46 cloph gluster only cares about the brick directory and the ability to set extended attributes on the files/dirs.
10:47 cloph gluster  itself doesn't have an understanding whether it runs from a partition, raid or lvm
10:47 cloph so whether you should use lvm for all servers depend on how you plan to expand later on.
10:48 cloph if lvm is the only way to grow the space for data in a meaningful way, it surely is an option to consider.
10:48 jith_ ok thanks.. also i read like all the brick size should be same.... then if i expand one lvm brick wont it create problem?
10:49 cloph they don't need to be same size, but of course the smallest determines the capacity of the whole volume
10:49 cloph if you expand a brick to 12TB, but the other brick of the replica pair can only store 2TB, then your volume can only store 2TB
10:50 cloph (calls it pairs, but of course can also be triplets,..)
10:55 jith_ cloph: ok i understood, so there is no rule like all brick size should be same.. thanks
10:56 Klas cloph: what happens when the smaller disk is full then?
10:56 cloph writes will fail because no more diskspace in the volume
10:57 cloph so if the smaller disk is full, you can replace the small brick with a larger one or grow the filesystem of the small brick with one way or the other.
10:57 jith_ can i use two bricks on one disk for two different volumes? if one volume occupies the full space.. the other volume cant write in its brick
10:58 cloph (either expanding a lvm or adding devices to a raid10 or similar)
10:58 cloph you can have bricks of different volumes in the same filesystem.
10:58 cloph And yes, if the disk-space is exceeded, then both volumes will be out of space.
11:00 shdeng joined #gluster
11:01 jith_ cloph, thanks
11:07 B21956 joined #gluster
11:13 devyani7 joined #gluster
11:16 hchiramm joined #gluster
11:21 aravindavk joined #gluster
11:26 aravindavk joined #gluster
11:40 bkunal joined #gluster
11:51 robb_nl joined #gluster
12:03 ilbot3 joined #gluster
12:03 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
12:04 om joined #gluster
12:06 om2 joined #gluster
12:11 devyani7 joined #gluster
12:18 shyam joined #gluster
12:26 ankitraj joined #gluster
12:32 ju5t post-factum: as it turns out replace brick isn't something we can use as we have a striped replicate volume
12:32 post-factum ju5t: yup, striped volumes are unflexible and obsoleted
12:33 ju5t is it at all possible to replace a brick in a setup with striped volumes?
12:45 jri joined #gluster
12:48 itisravi joined #gluster
12:48 rastar joined #gluster
12:57 unclemarc joined #gluster
13:00 om joined #gluster
13:01 ahino1 joined #gluster
13:06 plarsen joined #gluster
13:08 dlambrig joined #gluster
13:13 prasanth joined #gluster
13:22 atinm joined #gluster
13:29 jith_ in /etc/hosts i should give the ip with hostname.. i read like even 127.0.0.1 and 127.0.1.1 should be commented out.. is it so
13:36 nbalacha joined #gluster
13:42 dnunez joined #gluster
13:43 rastar joined #gluster
13:46 baojg joined #gluster
13:46 skoduri joined #gluster
13:47 squizzi joined #gluster
13:48 ahino joined #gluster
13:49 jith_ ?
13:49 jith_ cloph: ?
13:49 jith_ left #gluster
13:51 skylar joined #gluster
13:55 devyani7 joined #gluster
13:56 masber joined #gluster
14:09 prasanth joined #gluster
14:11 karnan joined #gluster
14:17 arcolife joined #gluster
14:18 baojg joined #gluster
14:19 shubhendu joined #gluster
14:23 harish joined #gluster
14:23 shyam joined #gluster
14:38 atinm joined #gluster
14:42 shyam joined #gluster
14:46 ivan_rossi left #gluster
15:04 wushudoin joined #gluster
15:05 wushudoin joined #gluster
15:06 ankitraj joined #gluster
15:08 derjohn_mob joined #gluster
15:11 devyani7 joined #gluster
15:13 ahino1 joined #gluster
15:39 arcolife joined #gluster
15:43 jith_ joined #gluster
15:44 ZachLanich joined #gluster
15:44 jith_ hi all, is it possible to change the volume type from replica to distribute replica?
15:46 hchiramm joined #gluster
15:58 jith_ if i created a volume with replica 2 initially with two servers.. and if i expand it by adding two more bricks will it be a distribute-replicate volume type?
16:03 post-factum jith_: yes
16:03 jith_ post-factum: thanks
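Roughly what that expansion looks like (names are illustrative): adding bricks in multiples of the replica count turns a pure replica volume into distribute-replicate, and a rebalance spreads existing files onto the new pair.

    gluster volume add-brick gvol server3:/data/brick1 server4:/data/brick1
    gluster volume rebalance gvol start
    gluster volume rebalance gvol status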
16:08 Pupeno joined #gluster
16:14 bluenemo joined #gluster
16:22 ic0n_ joined #gluster
16:23 Gambit15 joined #gluster
16:23 ic0n_ joined #gluster
16:28 muneerse2 joined #gluster
16:28 jiffin joined #gluster
16:28 jith_ post-factum, in /etc/hosts i should give the ip with hostname.. i read like even 127.0.0.1 and 127.0.1.1 should be commented out.. is it so?
16:29 cloph no, don't remove localhost definition
16:30 post-factum ^^
16:30 post-factum localhost is a must
16:31 kovshenin joined #gluster
16:31 jith_ ok thanks...
16:31 kkeithley use the FQ hostname or IP.  Don't use "localhost", and don't use "127.0.0.1"
16:32 kpease joined #gluster
16:33 jith_ yes i got some issues with localhost.. when i check the gluster peer status, it was showing with localhost in some place and 127.0.0.1 in some other place in the status.. when i add the peers after commenting the 127.0.0.1 localhost line it was fine
16:34 jith_ kkeithley, in /etc/hosts file u r saying??
16:35 msvbhat joined #gluster
16:35 kkeithley sorry if I was confusing. I meant when you create a volume ("gluster volume create ...") use FQ hostnames or IP addresses.
16:36 kkeithley If you use hostnames, they must be either in DNS or /etc/hosts
16:36 jith_ yes thanks
16:36 kkeithley as post-factum said, don't comment out the localhost line in /etc/hosts
16:36 kkeithley that will break _everything_
16:36 ppai joined #gluster
16:37 jith_ post-factum, cloph, kkeithley, thanks
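A sketch of an /etc/hosts that fits the advice above (addresses and names are illustrative): keep the localhost lines and give every peer a name that resolves to its real address on all nodes.

    127.0.0.1      localhost
    192.168.10.11  gluster1.example.com  gluster1
    192.168.10.12  gluster2.example.com  gluster2
    192.168.10.13  gluster3.example.com  gluster3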
16:49 d0nn1e joined #gluster
16:49 jiffin joined #gluster
16:56 post-factum np
16:58 jiffin joined #gluster
16:59 rafi joined #gluster
17:01 mhulsman joined #gluster
17:01 aravindavk joined #gluster
17:13 R0ok_ joined #gluster
17:18 atinm joined #gluster
17:22 d4n13L joined #gluster
17:30 rafi joined #gluster
17:36 devyani7 joined #gluster
17:42 Manikandan joined #gluster
17:43 msvbhat joined #gluster
17:46 ahino joined #gluster
17:49 ashiq joined #gluster
17:50 squizzi joined #gluster
17:51 jith_ while checking gluster volume status, i am getting Self-heal Daemon port as    N/A , but Online is Y , and showing some pid number
17:51 kovshenin joined #gluster
18:00 JoeJulian jith_: 3.5?
18:01 JoeJulian Oh, nm.
18:01 JoeJulian Yeah, the self-heal daemon port *is* not applicable.
18:07 ju5t joined #gluster
18:09 ashiq_ joined #gluster
18:09 Manikandan_ joined #gluster
18:12 Philambdo joined #gluster
18:31 squizzi joined #gluster
18:32 Philambdo joined #gluster
18:36 squizzi_ joined #gluster
18:40 rastar joined #gluster
18:44 prth joined #gluster
18:51 tg2 joined #gluster
18:51 jith_ joined #gluster
18:52 kenansulayman joined #gluster
18:53 legreffier joined #gluster
18:54 sandersr joined #gluster
18:56 prth_ joined #gluster
18:56 Iouns joined #gluster
18:56 xMopxShell joined #gluster
18:56 Champi joined #gluster
18:57 inodb joined #gluster
19:05 devyani7 joined #gluster
19:05 skylar joined #gluster
19:14 ackjewt joined #gluster
19:20 ten10 joined #gluster
19:22 LiftedKilt joined #gluster
19:22 jkroon joined #gluster
19:23 LiftedKilt Looking at using glusterfs as a persistent storage backend for docker swarm containers, and I have a quick question. The GlusterFS plugin that is linked to from the docker volume plugin page is marked unmaintained on the Github repo (https://github.com/calavera/docker-volume-glusterfs)
19:23 glusterbot Title: GitHub - calavera/docker-volume-glusterfs: [UNMAINTAINED] Volume plugin to use GlusterFS as distributed data storage (at github.com)
19:23 LiftedKilt is there a preferred fork that I should be looking at?
19:27 JoeJulian LiftedKilt: Never heard of that tool before now, sorry. Maybe this is useful? http://blog.gluster.org/category/docker/
19:29 ten10 i read the iscsi glusterfs doc but just wanted to confirm that if I had 2 linux servers I could have iscsi running on both using iscsi file based backing stores and they would appear on both iscsi target servers?
19:30 jri joined #gluster
19:33 ten10 right now if I have to take down my iscsi server I'm kind of hosed
19:34 LiftedKilt JoeJulian: So you aren't aware of any supported methods for using glusterfs as a docker volume plugin?
19:36 JoeJulian I haven't heard of anything, but I'm not a big docker fan.
19:38 squizzi joined #gluster
19:39 LiftedKilt JoeJulian: hmmm ok. Thanks
19:43 squeakyneb joined #gluster
19:46 squizzi joined #gluster
20:17 post-factum JoeJulian++ for not being docker fan
20:17 glusterbot post-factum: JoeJulian's karma is now 31
20:23 ben453 joined #gluster
20:27 squizzi joined #gluster
20:47 Pupeno joined #gluster
20:49 dkalleg joined #gluster
20:52 dkalleg Hi, my /var/log/glusterfs/etc-glusterfs-glusterd.vol.log file and other gluster log files always created with permissions 0600, but I'm interested in changing it to 0640 in some config.  I haven't seen any gluster config for this, and I've tried tweaking the umask for gluster in upstart, but no dice. Any advice?
20:53 dkalleg I'm not interested in changing the permissions at runtime either.
20:55 congpine joined #gluster
20:57 congpine Hi, i'm trying to enable some options for Gluster volume but hit with this error
20:57 congpine volume set: failed: One or more connected clients cannot support the feature being set. These clients need to be upgraded or disconnected before running this command again
20:58 congpine as far as I can see, all clients use the same glusterfs-client version number. What is the best way to identify and isolate these clients ?
21:13 squizzi joined #gluster
21:17 jri joined #gluster
21:22 JoeJulian congpine: most likely, some client hasn't had their mount remounted since you upgraded. I usually check my hosts for open deleted libraries regularly, "lsof | grep deleted".
21:32 congpine I didn't upgrade gluster server. where do I run lsof ? on the host that produced the above error or on all clients ?
21:34 kovshenin joined #gluster
21:34 JoeJulian Sounds like a client that has the volume mounted is the one with the problem.
21:35 JoeJulian According to "One or more connected clients..."
21:43 jkroon joined #gluster
21:45 om joined #gluster
21:47 squizzi joined #gluster
21:53 kovshenin joined #gluster
21:59 congpine i found 2 suspected clients by grep logs from the gluster server, unmount the volume but still couldn't change the setting ( I made the setting few months ago)
21:59 congpine accepted client from xxx (version: 3.5.2)
21:59 congpine i'm running gluster 3.5.4
22:00 congpine I checked glusterfs-client on those servers, they are running 3.5.4 . i'm not sure what version 3.5.2 refers to
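Two hedged ways to hunt down the stragglers: grep the brick logs on the servers for the connect message quoted above, and check suspect clients for gluster processes still holding deleted (pre-upgrade) libraries, i.e. mounts that were never remounted.

    # on each server (log path may differ by distro)
    grep 'accepted client' /var/log/glusterfs/bricks/*.log | grep -v 'version: 3.5.4'
    # on each suspect client
    lsof -n | grep gluster | grep -i deleted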
22:17 hackman joined #gluster
22:53 sonicrose joined #gluster
22:54 sonicrose hi all.. is there any place to download RPMs for gluster 3.8 for EL5 x86_64 ?? can only find 6 and 7
23:02 sonicrose also, i’m building a new volume that will store VHD’s for VMs being accessed over NFS… any particular volume options I should set to make the VM’s run best?
23:02 sonicrose can I use shards?
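Sharding is aimed at exactly this use case from 3.7 onwards; a hedged sketch of the usual tuning (volume name is a placeholder, and check that the packaged "virt" option group exists on your install before relying on it):

    gluster volume set gvol features.shard on
    gluster volume set gvol features.shard-block-size 64MB
    gluster volume set gvol group virt    # applies the packaged set of VM-friendly options, if present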
23:07 muneerse joined #gluster
23:31 plarsen joined #gluster
23:40 dkalleg Any ideas on my gluster log question?  Trying to have gluster logs have 0640 permissions instead of 0600
