
IRC log for #gluster, 2016-04-22


All times shown according to UTC.

Time Nick Message
00:06 Javezim Hey @JoeJulian, You wrote the following for me yesterday regarding Directories showing split brain
00:06 Javezim for d in $(find $brick_root -type d | xargs getfattr -m trusted.afr. | egrep -v '^#|^$' | sort -u); do find $brick_root -type d -exec setfattr -x $d {} \; done
00:06 Javezim However when we try it, the result comes back as if it's waiting for more input -  >
00:06 Javezim Any idea what's causing that?
00:08 MugginsM joined #gluster
00:24 MugginsM joined #gluster
00:29 gbox Javezim: I hope you figured this out but there's a missing semicolon at the end there before "done"
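For reference, the same one-liner with the missing semicolon added before "done" (quoted otherwise as-is; $brick_root stands for the brick path, and whether the loop's logic does what you want is a separate question):

    for d in $(find $brick_root -type d | xargs getfattr -m trusted.afr. | egrep -v '^#|^$' | sort -u); do find $brick_root -type d -exec setfattr -x $d {} \; ; done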
01:09 EinstCrazy joined #gluster
01:11 dlambrig_ joined #gluster
01:22 Javezim Thanks @gbox!
01:31 Javezim Okay we're receiving the following when trying - gluster volume heal gv0mel split-brain bigger-file
01:31 Javezim Lookup failed on <FILE> Stale file handle
01:31 Javezim Anyone know what this is?
01:36 rastar joined #gluster
01:46 dlambrig_ joined #gluster
01:50 EinstCra_ joined #gluster
02:00 harish joined #gluster
02:04 nangthang joined #gluster
02:22 syadnom JoeJulian, if I do multiple bricks on one server, does gluster have any way to choose writes (with replicas>=2) on a different box?
02:23 JoeJulian @brick order
02:23 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
02:23 syadnom for instance, it's less than optimal to have 2 replicas on the same box if that box fails...
02:23 syadnom brick order?
02:24 syadnom so just put the bricks in the order I'd like the ~round robin hash to write to them?
02:24 syadnom ie, server1:/brick1 server2:/brick1 server3:/brick1 server1:/brick2 server2:/brick2 and so on?
02:24 JoeJulian I guess that's one way of saying it, yeah.
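A hedged sketch of that ordering (hypothetical hostnames and brick paths): with replica 2, each consecutive pair of bricks in the create command forms a replica set, so alternating servers keeps both copies of any file on different boxes:

    gluster volume create myvol replica 2 \
        server1:/data/brick1 server2:/data/brick1 \
        server3:/data/brick1 server1:/data/brick2 \
        server2:/data/brick2 server3:/data/brick2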
02:25 syadnom can I re-order those at any point?  say I add a brick to a server, or add a server?
02:27 syadnom I found this: https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
02:27 glusterbot Title: How to expand GlusterFS replicated clusters by one server (at joejulian.name)
02:27 syadnom less than ideal...
02:28 syadnom would be WAY nicer to 'move' a bricks position in the list, or specify a new order and then just rebalance.
02:30 JoeJulian Well you *could* use replace-brick commit force, but then you would have a self-heal period where data was no longer replicated.
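The command JoeJulian is alluding to looks roughly like this (hypothetical volume and brick names); "commit force" swaps the brick immediately and relies on self-heal to repopulate it, which is the unreplicated window he mentions:

    gluster volume replace-brick myvol \
        server1:/data/brick2 server4:/data/brick1 commit force
    # then watch self-heal catch up
    gluster volume heal myvol info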
02:31 syadnom JoeJulian, maybe I'm trying to make gluster do something that isn't right for it.... but gluster is so easy compared to some other solutions.  I want, basically, to dynamically add storage that is fault tolerant to the network.  I have some multi-bay boxes (aspire h340) and some bananapi w/ sata ports etc.
02:34 MugginsM joined #gluster
02:38 MugginsM joined #gluster
02:38 kdhananjay joined #gluster
02:51 vmallika joined #gluster
02:53 Lee1092 joined #gluster
02:57 hagarth joined #gluster
03:17 ahino joined #gluster
03:19 overclk joined #gluster
03:21 rideh joined #gluster
03:25 Saravanakmr joined #gluster
03:35 overclk joined #gluster
03:39 nehar joined #gluster
03:41 jobewan joined #gluster
03:45 syadnom thoughts on glusterfs as a datastore for a Windows Server 2012 R2 machine?  Need a replicated network store, not interested in high-dollar SANs, and gluster seems a good fit if I can get the connectivity right.  Highly prefer to not have a single point of failure, but no glusterfs client on windows :(
03:47 syadnom oh, and ideally it shouldn't tank performance... really want to see 100Mbps of write performance with as much as 1Gbit in the future because I can do cheap 10Gbit NICs
03:48 skoduri|afk joined #gluster
03:48 JoeJulian You can set up ctdb and/or nfs-ganesha HA.
03:49 syadnom JoeJulian, is the NFS client on Windows 2012 R2 reliable?
03:49 JoeJulian I haven't used windows in years.
03:50 syadnom I'm working on a client backup program that runs on Windows Server.  I'm primarily Linux myself
03:50 dlambrig_ joined #gluster
03:51 syadnom JoeJulian, is NFS-Ganesha easy to set up with Gluster?
03:52 skoduri|afk syadnom, if you do not want HA, it's pretty straightforward.
03:52 skoduri|afk syadnom, you can refer http://blog.gluster.org/2015/10/linux-scale-out-nfsv4-using-nfs-ganesha-and-glusterfs-one-step-at-a-time/
03:52 glusterbot Title: Linux scale out NFSv4 using NFS-Ganesha and GlusterFS — one step at a time | Gluster Community Website (at blog.gluster.org)
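For the simple non-HA case that post describes, the ganesha.conf export block for a Gluster volume looks roughly like this (the volume name "gv0" is hypothetical, and option names may vary slightly between nfs-ganesha versions):

    EXPORT {
        Export_Id = 1;
        Path = "/gv0";
        Pseudo = "/gv0";
        Access_Type = RW;
        FSAL {
            Name = GLUSTER;
            Hostname = "localhost";
            Volume = "gv0";
        }
    }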
03:54 * syadnom reading
03:58 RameshN joined #gluster
04:01 nishanth joined #gluster
04:06 syadnom skoduri, have you used it with a nfsv4.1 client?
04:07 syadnom I'm wondering how it might work with cohortfs' nfsv4.1 pnfs client for windows...
04:09 skoduri syadnom, yes...nfsv4.1 & pNFS was tested
04:09 skoduri syadnom, though we never tested using windows client
04:09 russoisraeli joined #gluster
04:09 syadnom skoduri, is that with the HA setup above?
04:10 skoduri syadnom, for pNFS we have a limitation at the moment that you cannot use HA, as the HA setup is limited to a few nodes whereas pNFS would need the NFS server to be running on all the nodes
04:11 skoduri syadnom, http://gluster.readthedocs.org/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Intergration/#configuring-gluster-volume-for-pnfs
04:11 glusterbot Title: Configuring NFS-Ganesha server - Gluster Docs (at gluster.readthedocs.org)
04:11 syadnom skoduri, ok, so there has to be something to coordinate the pnfs nodes ....
04:11 skoduri syadnom, yes we use Upcall infrastructure to sync MDS while the data is being written to DS
04:12 syadnom I don't know if you read back, but I'm looking for a good replicated file store for a backup solution.  trying to avoid single-point-of-failure or bottleneck.
04:13 skoduri syadnom, okay...do you need to use pNFS or can go with NFSv3/NFSv4?
04:13 syadnom I'm guessing that with pnfs on more nodes than there are replicas, each node's gluster client is going to have to read from other nodes when its NFS interface (or nfs-ganesha) is hit...
04:14 syadnom I don't want to have a single point of failure in the storage system.
04:14 syadnom without doing some virtual ip sharing and having a single node handle file access...
04:14 skoduri syadnom, ohh .... then you would need glusterfs fuse client IMO
04:15 syadnom skoduri, backup server software runs on windows.
04:15 syadnom ie, no gluster client
04:15 skoduri syadnom, all protocol (NFS/SMB) access happens via the mount IP... and if that server is down, your mount point will be inaccessible unless you use a virtual IP to fail over
04:16 syadnom isn't pnfs supposed to handle that by caching server addresses and disqualifying inaccessible servers?
04:16 syadnom as long as the metadata server is up?
04:17 syadnom I thought about running a VM on the windows server to have a gluster client and exporting from there...  not sure how ugly performance might be with that though.
04:17 skoduri syadnom, yes we can do so...
04:18 syadnom a vm running linux I mean....
04:18 overclk joined #gluster
04:18 Javezim Anyone ever received Lookup failed on <FILE> Stale file handle when trying to access a folder/file in Gluster?
04:19 skoduri syadnom, I will check code and confirm..but I guess we do send list of DS server addresses for pNFS client to select from
04:19 skoduri syadnom, I will be afk for some time..could you drop a mail to gluster-users...shall reply to it
04:19 syadnom ok, another question.  looks like gluster v3.6+ supports dissimilar brick sizes?  I found that in the mailing list archives... but don't really see it in features..
04:20 syadnom skoduri, how about a windows gluster client ;)
04:20 syadnom left #gluster
04:20 syadnom joined #gluster
04:23 poornimag joined #gluster
04:34 aravindavk joined #gluster
04:57 gem joined #gluster
04:57 shubhendu joined #gluster
04:58 rafi joined #gluster
05:02 Manikandan_ joined #gluster
05:05 ashiq joined #gluster
05:08 ndarshan joined #gluster
05:10 nbalacha joined #gluster
05:15 karthik___ joined #gluster
05:16 john51 joined #gluster
05:16 MugginsM joined #gluster
05:17 hgowtham joined #gluster
05:17 Bhaskarakiran joined #gluster
05:19 ramteid joined #gluster
05:30 spalai joined #gluster
05:33 aspandey joined #gluster
05:35 hchiramm joined #gluster
05:38 Apeksha joined #gluster
05:40 nishanth joined #gluster
05:45 Manikandan joined #gluster
06:01 ppai joined #gluster
06:04 skoduri syadnom, there?
06:04 ramky joined #gluster
06:10 mhulsman joined #gluster
06:11 spalai joined #gluster
06:18 armyriad joined #gluster
06:20 nathwill joined #gluster
06:22 pur__ joined #gluster
06:24 jtux joined #gluster
06:31 Javezim Anyone ever received ---  Lookup failed on <FILE> Stale file handle --- when trying to access a folder/file in Gluster? Any idea what it is or how to fix?
06:31 glusterbot Javezim: -'s karma is now -357
06:31 glusterbot Javezim: -'s karma is now -358
06:39 Wizek joined #gluster
06:42 vmallika joined #gluster
06:43 The_Pugilist joined #gluster
06:56 kdhananjay1 joined #gluster
06:58 _gbox Javezim:  That seems NFS related.  Are you using the NFS client?
06:58 kdhananjay joined #gluster
06:59 Javezim @_Gbox No we use Fuse-Mount to share it out
07:01 gbox Javezim: OK googling this shows PranithK helping someone with that problem: https://www.gluster.org/pipermail/gluster-users/2014-March/016458.html
07:01 glusterbot Title: [Gluster-users] Stale file handle with FUSE client (at www.gluster.org)
07:02 gbox Javezim: remount with 'use-readdirp=no'?  I'm not sure what they mean by dropping caches.
07:03 anil_ joined #gluster
07:03 gbox Javezim: Interesting bit of Linux I didn't know: sync ; echo 3 > /proc/sys/vm/drop_caches
07:05 gbox Javezim: Better read up before messing with the caches though: http://www.linuxinsight.com/proc_sys_vm_drop_caches.html
07:05 glusterbot Title: drop_caches | LinuxInsight (at www.linuxinsight.com)
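Putting gbox's two suggestions together as a sketch (hypothetical server, volume, and mountpoint names; read the page above before dropping caches on a busy machine):

    umount /mnt/gv0
    mount -t glusterfs -o use-readdirp=no server1:/gv0 /mnt/gv0
    sync; echo 3 > /proc/sys/vm/drop_caches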
07:07 kovshenin joined #gluster
07:08 Wizek joined #gluster
07:11 jri joined #gluster
07:19 fsimonce joined #gluster
07:23 hackman joined #gluster
07:23 The_Pugilist joined #gluster
07:23 ctria joined #gluster
07:28 Javezim Anyone seen the following? When Running - for d in $(find $brick_root -type d | xargs getfattr -m trusted.afr. | egrep -v '^#|^$' | sort -u); do find $brick_root -type d -exec setfattr -x $d {} \;; done
07:29 Javezim Folder path receives: setfattr: ./vault/Folder/Folder/Folder/Folder/File: No such attribute
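The "No such attribute" messages come from the inner find applying each trusted.afr.* name to every directory on the brick, including directories that never carried it, so they are mostly noise. A hedged alternative sketch that only removes the names actually present on each directory (untested; $brick_root is the brick path):

    find "$brick_root" -type d | while read -r dir; do
        getfattr --absolute-names -m 'trusted.afr.' "$dir" 2>/dev/null \
            | grep '^trusted\.afr\.' | while read -r attr; do
                setfattr -x "$attr" "$dir"
            done
    done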
07:32 Saravanakmr joined #gluster
07:37 [Enrico] joined #gluster
07:43 [Enrico] joined #gluster
07:44 ivan_rossi joined #gluster
07:44 robb_nl joined #gluster
07:56 devilspgd joined #gluster
08:02 karthik___ joined #gluster
08:10 rastar joined #gluster
08:17 Slashman joined #gluster
08:25 paul98 joined #gluster
08:30 paul98_ joined #gluster
08:34 [Enrico] joined #gluster
08:34 arcolife joined #gluster
08:46 kassav joined #gluster
09:00 EinstCrazy joined #gluster
09:10 Bhaskarakiran joined #gluster
09:17 ppai joined #gluster
09:18 jwd joined #gluster
09:35 kassav hello guys
09:35 kassav anyone using proxmox with glusterFS?
09:36 post-factum kassav: you'll get more help if you ask a more specific question
09:37 kassav post-factum: i want to implement distributed storage with proxmox, need to choose between gluster and ceph
09:37 kassav i'm looking for previous use cases
09:38 post-factum kassav: do you need a storage for VM images?
09:39 kassav yes and for all my proxmox infrastructure
09:39 _nex_ joined #gluster
09:40 post-factum you could use both solutions, but I'd stick to ceph rbd for that
09:41 kassav why you prefer ceph?
09:42 post-factum i had a negative experience with HA for gluster vm storage, but that was a long time ago. however, i switched to ceph then, and now it just works
09:42 post-factum i use glusterfs extensively for file shares, though
09:43 post-factum (also i like ceph architecture more, but that is IMO)
09:43 kassav did you encounter any problem with ceph?
09:43 post-factum not with RBD
09:43 post-factum with FS only
09:44 post-factum btw, they've released Jewel with FS declared stable, so you could also try that, if you need FS
09:45 post-factum i have Hammer in production
09:46 kassav post-factum: could you present your global architecture of proxmox clusters?
09:47 EinstCrazy joined #gluster
09:47 post-factum we do not use proxmox :)
09:48 kassav post-factum: what do you use, then?
09:53 Bhaskarakiran joined #gluster
09:53 post-factum libvirt
09:57 EinstCrazy joined #gluster
09:58 kassav post-factum: you have a commercial solution?
09:58 post-factum nope
09:58 karnan joined #gluster
09:58 post-factum centos + lots of enthusiams
09:58 post-factum *enthusiasm
10:02 DV_ joined #gluster
10:05 v12aml joined #gluster
10:08 shubhendu joined #gluster
10:08 kassav post-factum: no commercial purpose?
10:13 mhulsman1 joined #gluster
10:15 ppai joined #gluster
10:15 post-factum it is internal cloud that hosts almost all our services
10:17 EinstCrazy joined #gluster
10:21 jri_ joined #gluster
10:24 natarej joined #gluster
10:26 The_Pugilist joined #gluster
10:37 robb_nl joined #gluster
10:39 shubhendu joined #gluster
10:40 robb_nl joined #gluster
10:43 caitnop joined #gluster
11:00 johnmilton joined #gluster
11:02 Manikandan joined #gluster
11:22 amye joined #gluster
11:22 russoisraeli joined #gluster
11:24 DV_ joined #gluster
11:33 djmentos joined #gluster
11:33 djmentos hi there
11:34 djmentos i created a simple replica, 2 hosts - 10.54.128.80 && .86, mounted on the first host: mount.glusterfs 10.54.128.80:/gv0 /mnt/mount-test and on the second: mount.glusterfs 10.54.128.86:/gv0 /mnt/mount-test
11:34 djmentos files don't sync :<
11:35 Debloper joined #gluster
11:38 post-factum djmentos: gluster peer status
11:39 post-factum djmentos: gluster volume info gv0
11:39 post-factum djmentos: gluster volume status gv0
11:40 nishanth joined #gluster
11:40 post-factum @paste
11:40 glusterbot post-factum: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
11:42 djmentos http://pastebin.com/nN7nnH8Y
11:42 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
11:43 djmentos https://paste.fedoraproject.org/358535/46132537/
11:43 glusterbot Title: #358535 Fedora Project Pastebin (at paste.fedoraproject.org)
11:43 post-factum djmentos: so what exactly do you do after mounting the volume?
11:45 djmentos touch test
11:45 djmentos and it's not visible on second node
11:48 post-factum do you create new file under mountpoint?
11:48 djmentos yes
11:48 post-factum ok, then, you need to take a look at logs
11:49 post-factum also, you need to check your firewall
11:50 djmentos fw is disabled, machines are in the same subnet, I will check logs
11:51 post-factum first, check client logs
11:51 post-factum under /var/log/glusterfs/mnt-mount-test.log, i guess
11:52 unclemarc joined #gluster
12:06 djmentos ok, I found the solution... it was stupid
12:07 djmentos I mounted gfs while I was inside the mountpoint
12:07 djmentos I needed to go .. and then again there :D
12:08 djmentos thx
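For the record, the fix djmentos describes is simply stepping out of the mountpoint and back in, so the shell's working directory resolves through the new mount instead of the old underlying directory:

    # after mounting on top of the directory the shell was sitting in:
    cd ..            # leave the mountpoint so it gets re-resolved
    cd mount-test    # the cwd is now the glusterfs mount
    touch test       # this lands on the volume and replicates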
12:13 arcolife joined #gluster
12:18 post-factum :D
12:19 post-factum that is why i was asking stupid questions
12:19 djmentos no problem :D
12:23 ahino joined #gluster
12:27 bluenemo joined #gluster
12:28 bennyturns joined #gluster
12:30 Romeor joined #gluster
12:33 jwd joined #gluster
12:42 julim joined #gluster
13:00 amye joined #gluster
13:05 gem joined #gluster
13:16 DV__ joined #gluster
13:16 EinstCrazy joined #gluster
13:21 shyam joined #gluster
13:24 EinstCrazy joined #gluster
13:28 ahino joined #gluster
13:49 hybrid512 joined #gluster
13:50 mpietersen joined #gluster
13:51 post-factum joined #gluster
13:51 EinstCrazy joined #gluster
13:51 plarsen joined #gluster
13:54 DV_ joined #gluster
13:54 post-factum joined #gluster
13:57 bennyturns joined #gluster
14:01 pfactum joined #gluster
14:02 nbalacha joined #gluster
14:05 skylar joined #gluster
14:10 EinstCrazy joined #gluster
14:10 post-factum joined #gluster
14:15 Guest3901 joined #gluster
14:18 EinstCrazy joined #gluster
14:22 Bhaskarakiran joined #gluster
14:23 hackman joined #gluster
14:24 EinstCrazy joined #gluster
14:29 EinstCrazy joined #gluster
14:33 DV_ joined #gluster
14:33 mpietersen joined #gluster
14:34 EinstCra_ joined #gluster
14:35 kpease joined #gluster
14:43 Caveat4U joined #gluster
14:43 EinstCrazy joined #gluster
14:44 EinstCr__ joined #gluster
14:46 ahino joined #gluster
14:48 amye joined #gluster
14:49 nbalacha joined #gluster
14:55 EinstCrazy joined #gluster
15:02 EinstCrazy joined #gluster
15:04 shubhendu joined #gluster
15:04 squizzi joined #gluster
15:10 robb_nl joined #gluster
15:12 shubhendu_ joined #gluster
15:13 amye joined #gluster
15:26 netzapper joined #gluster
15:27 netzapper since gluster is implemented in user space, is it bound by ulimits? And, if so, what is supposed to happen when `glusterd` reaches the open files ulimit with files open on the backing filesystem?
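The question goes unanswered in the channel, but as a side note, the limit a running glusterd (or brick) process is actually operating under can be read from /proc, e.g.:

    grep 'open files' /proc/$(pidof glusterd)/limits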
15:31 Wizek joined #gluster
15:32 EinstCrazy joined #gluster
15:33 squizzi joined #gluster
15:39 bennyturns joined #gluster
15:52 johnmilton joined #gluster
15:53 wnlx joined #gluster
15:53 bennyturns joined #gluster
16:00 Pupeno joined #gluster
16:00 Pupeno joined #gluster
16:05 dgandhi joined #gluster
16:06 Saravanakmr joined #gluster
16:07 dgandhi joined #gluster
16:08 dgandhi joined #gluster
16:09 dgandhi joined #gluster
16:11 dgandhi joined #gluster
16:12 [o__o] joined #gluster
16:13 dgandhi joined #gluster
16:17 EinstCrazy joined #gluster
16:18 dlambrig_ joined #gluster
16:21 bennyturns joined #gluster
16:21 spalai joined #gluster
16:22 tom[] joined #gluster
16:24 squizzi joined #gluster
16:28 nathwill joined #gluster
16:32 ivan_rossi left #gluster
16:37 DV_ joined #gluster
16:45 johnmilton joined #gluster
16:55 DV_ joined #gluster
16:58 brandon_ joined #gluster
16:59 post-factum joined #gluster
17:10 DV_ joined #gluster
17:10 nishanth joined #gluster
17:10 DV_ joined #gluster
17:11 DV__ joined #gluster
17:13 skoduri joined #gluster
17:15 ic0n joined #gluster
17:19 necrogami joined #gluster
17:23 Telsin joined #gluster
17:24 amye joined #gluster
17:32 hagarth joined #gluster
17:48 jwd joined #gluster
18:06 F2Knight joined #gluster
18:06 skylar joined #gluster
18:07 shaunm joined #gluster
18:20 hagarth joined #gluster
18:22 Telsin left #gluster
18:24 squizzi joined #gluster
18:48 ahino joined #gluster
18:59 gem joined #gluster
19:11 Caveat4U joined #gluster
19:13 russoisraeli joined #gluster
19:15 rwheeler joined #gluster
19:19 amye joined #gluster
19:46 robb_nl joined #gluster
19:59 Telsin joined #gluster
20:02 jwd joined #gluster
20:14 dgandhi joined #gluster
20:17 Caveat4U left #gluster
20:28 edong23 joined #gluster
20:41 gem joined #gluster
21:02 amye joined #gluster
21:30 nathwill is it expected that using replace-brick on a single-brick cluster leaves all the old data behind?
21:31 nathwill it kind of makes sense, i guess you'd have to go to replica 2, then remove-brick back down to single replica on the old brick
21:32 nathwill just trying to rehearse a data migration, thought replace-brick replicated data *before* brick removal, but it seems to do it afterwards
21:40 foster joined #gluster
21:42 russoisraeli joined #gluster
21:49 chirino joined #gluster
22:20 JoeJulian If you replace-brick <blah blah blah> start, it should do what you're asking.
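For reference, the older data-migration form JoeJulian sketches looked roughly like this (hypothetical names; this start/status/commit flow is what nathwill recalls being deprecated, with add-brick/remove-brick or "commit force" plus self-heal as the newer routes):

    gluster volume replace-brick myvol server1:/data/brick1 server2:/data/brick1 start
    gluster volume replace-brick myvol server1:/data/brick1 server2:/data/brick1 status
    gluster volume replace-brick myvol server1:/data/brick1 server2:/data/brick1 commit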
22:29 dlambrig_ joined #gluster
22:33 chirino joined #gluster
22:51 nathwill ah, ok. i had thought that was deprecated... i'll give it a shot, thanks!
22:51 Pupeno joined #gluster
23:03 brandon_ joined #gluster
23:04 amye joined #gluster
23:06 bennyturns joined #gluster
23:46 hagarth joined #gluster
23:51 bennyturns joined #gluster
