
IRC log for #gluster, 2017-04-12


All times shown according to UTC.

Time Nick Message
00:13 major fun .. managed to find a few bugs in the code ..
00:13 major my bugs even
00:15 misc is that related to whiskey ?
00:15 major I wish
00:15 major haven't had any at the desk in like .. 4 weeks :(
00:16 major no .. just some fun in the added complexity of btrfs' faked superblock insanity
00:19 major but .. I think I am done tweaking this Vagrantfile
01:13 musinsky joined #gluster
01:17 shdeng joined #gluster
01:23 ahino joined #gluster
01:36 musinsky hey all, im looking into gluster as a distributed filesystem to be used in a bunch of sites across the country
01:37 musinsky my understanding is that geo replication is just replication and you wouldnt be writing to the replicas
01:38 musinsky so i guess my question is, how well does a typical synchronous cluster deal with latency? and is that the best way to go for active/active sites across the country?
01:54 Tanner__ joined #gluster
01:59 derjohn_mob joined #gluster
02:09 susant joined #gluster
02:10 rafi joined #gluster
02:10 susant left #gluster
02:45 prasanth joined #gluster
02:47 aravindavk joined #gluster
03:04 Gambit15 joined #gluster
03:28 aravindavk joined #gluster
03:31 magrawal joined #gluster
03:34 riyas joined #gluster
03:42 ashiq joined #gluster
03:53 atinm joined #gluster
03:58 itisravi joined #gluster
04:01 _ndevos joined #gluster
04:01 _ndevos joined #gluster
04:05 gyadav joined #gluster
04:07 jiffin joined #gluster
04:14 dominicpg joined #gluster
04:43 itisravi joined #gluster
04:45 Shu6h3ndu joined #gluster
04:49 nbalacha joined #gluster
04:49 ankitr joined #gluster
04:55 sona joined #gluster
04:57 buvanesh_kumar joined #gluster
05:05 rafi joined #gluster
05:10 nbalacha joined #gluster
05:16 ppai joined #gluster
05:18 gyadav joined #gluster
05:18 Prasad joined #gluster
05:20 nbalacha_ joined #gluster
05:22 ndarshan joined #gluster
05:22 R0ok_ joined #gluster
05:24 buvanesh_kumar joined #gluster
05:25 buvanesh_kumar joined #gluster
05:27 skumar joined #gluster
05:35 amarts joined #gluster
05:35 nishanth joined #gluster
05:37 Philambdo joined #gluster
05:44 sanoj joined #gluster
05:46 sona joined #gluster
05:49 msvbhat joined #gluster
05:51 susant joined #gluster
05:52 kotreshhr joined #gluster
05:54 Saravanakmr joined #gluster
05:56 buvanesh_kumar joined #gluster
05:58 mb_ joined #gluster
05:58 apandey joined #gluster
06:00 skumar_ joined #gluster
06:09 msvbhat joined #gluster
06:10 rafi joined #gluster
06:15 amarts joined #gluster
06:15 skoduri joined #gluster
06:20 ppai joined #gluster
06:21 sbulage joined #gluster
06:23 TBlaar joined #gluster
06:25 hgowtham joined #gluster
06:26 derjohn_mob joined #gluster
06:28 jtux joined #gluster
06:29 jtux left #gluster
06:29 Karan joined #gluster
06:30 atinm joined #gluster
06:33 aravindavk joined #gluster
06:39 ayaz joined #gluster
06:41 jkroon joined #gluster
06:47 jiffin1 joined #gluster
06:53 cvstealt1 joined #gluster
06:55 jarbod joined #gluster
06:55 jarbod_ joined #gluster
06:55 nixpanic joined #gluster
06:55 rwheeler joined #gluster
06:55 sage_ joined #gluster
06:56 nixpanic joined #gluster
06:56 misc joined #gluster
06:56 d-fence_ joined #gluster
06:57 melliott joined #gluster
06:57 csaba joined #gluster
06:57 edong23 joined #gluster
06:57 john51 joined #gluster
06:58 yoavz joined #gluster
07:02 buvanesh_kumar joined #gluster
07:10 jiffin joined #gluster
07:12 mbukatov joined #gluster
07:12 ayaz joined #gluster
07:14 msvbhat joined #gluster
07:15 itisravi joined #gluster
07:16 apandey joined #gluster
07:19 martinetd joined #gluster
07:20 Limebyte joined #gluster
07:20 ankitr joined #gluster
07:22 shortdudey123 joined #gluster
07:23 skumar__ joined #gluster
07:26 fsimonce joined #gluster
07:29 ankitr joined #gluster
07:37 rafi joined #gluster
07:46 amarts joined #gluster
07:53 ppai joined #gluster
07:53 buvanesh_kumar joined #gluster
07:54 nbalacha_ joined #gluster
07:56 skumar joined #gluster
07:56 atinm joined #gluster
08:00 flying joined #gluster
08:06 ankitr joined #gluster
08:07 armyriad joined #gluster
08:11 panina joined #gluster
08:13 nbalacha_ joined #gluster
08:17 Philambdo joined #gluster
08:18 ahino joined #gluster
08:20 nbalacha joined #gluster
08:24 jiffin joined #gluster
08:46 ankitr joined #gluster
08:48 sanoj joined #gluster
08:55 jwd joined #gluster
08:58 jiffin joined #gluster
09:00 MrAbaddon joined #gluster
09:01 vincenzoml joined #gluster
09:02 vincenzoml hi there, I'm trying to set up glusterfs with a single home server and a client on my laptop. I'm using ubuntu 17.04 as the server and trying to set it up using existing guides, but something differs
09:03 vincenzoml I've installed the server, and started the service in init.d, then I run "gluster peer status" and it says "number of peers: 0". Guides state it would be 1
09:04 ayaz joined #gluster
09:04 vincenzoml "gluster peer probe localhost" says that I don't need to do that. I've allocated a hostname on ddns.net, and forwarded ports to my host, but "gluster peer probe MYHOSTNAME" says " either already part of another cluster or having volumes configured"
09:04 vincenzoml don't know how to proceed :)
09:04 vincenzoml I didn't do any other step than the ones mentioned above
09:05 jiffin vincenzoml: localhost by default will be part of the cluster
09:05 jiffin gluster peer status lists the peers other than localhost
09:06 vincenzoml Ok, but why is the number of peers 0 then?
09:08 vincenzoml but then I try to create a volume with "gluster volume create gv0 MYHOST:/home/gluster" and I see "host is not in 'Peer in Cluster' state"
09:08 vincenzoml that's using the ddns.net domain
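(For reference, a minimal single-node, single-brick flow looks roughly like this; the hostname and brick path are placeholders, not vincenzoml's actual setup. On a fresh install the local host is the only member of the trusted pool, so peer status reports 0 other peers.)

    # fresh single-node install: the local host is the implicit pool member
    gluster peer status              # -> Number of Peers: 0
    gluster pool list                # -> shows only localhost, Connected
    # create and start a one-brick volume; 'force' is only needed if the
    # brick directory lives on the root filesystem
    gluster volume create gv0 myserver:/home/gluster/brick force
    gluster volume start gv0
    gluster volume info gv0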
09:12 jiffin vincenzoml: maybe it is not resolving properly, can u try with an ip?
09:15 vincenzoml jiffin: tried, same error; the host name resolves correctly (tried with ssh)
09:15 nishanth joined #gluster
09:15 vincenzoml (but I tried the numeric ip anyway)
09:16 zakharovvi[m] joined #gluster
09:16 jiffin vincenzoml: that's weird
09:17 vincenzoml I will uninstall and purge glusterfs package and retry; just for safety, are the ports listed here the correct ones? https://www.jamescoyle.net/how-to/457-glusterfs-firewall-rules
09:17 glusterbot Title: GlusterFS firewall rules – JamesCoyle.net (at www.jamescoyle.net)
09:20 jiffin vincenzoml: can u check /var/log/glusterfs/glusterd.log ?
09:20 jiffin if there are any errors
09:20 skoduri joined #gluster
09:21 vincenzoml yes, let us check the forwarded ports in the meantime: I forwarded (from router to server) ports 24007, 24008, 49152, 111 (that's what I understand I have to do for single volume, single brick fs on v3.10)
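(As a reference, the ports vincenzoml lists translate to firewall rules roughly like the sketch below for a 3.x single-brick volume; brick ports start at 49152 and each additional brick on a node takes the next port.)

    # glusterd management
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
    # portmapper, needed for the legacy gNFS server
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    # one port per brick, starting at 49152
    iptables -A INPUT -p tcp --dport 49152 -j ACCEPT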
09:23 vincenzoml I don't have glusterd.log, only other log files, but let me first purge and reinstall and also remove /var/log/glusterfs, you never know
09:26 skumar_ joined #gluster
09:26 ayaz How do you stop the gluster service on a node? Stopping the glusterfs-server service does not seem to do much. Several glusterfs* processes continue to run.
09:26 vincenzoml jiffin: does the directory that I use for volume creation require any special permissions or user? It's owned by root right now
09:28 vincenzoml jiffin: https://pastebin.com/bainajFJ actually there's an error related to UUID
09:28 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
09:30 saybeano joined #gluster
09:31 jiffin1 joined #gluster
09:32 jiffin1 vincenzoml: i don't think so
09:32 askz hi, I have two replica nodes, one is alright, but the second has a ~35gig difference between the brick and the mount point
09:32 askz any ideas?
09:34 vincenzoml before going on let me ask a crucial question: does glusterfs support disconnected operation from clients? I somewhat assumed that based on things I've read around but now I think I never checked that properly
09:34 vincenzoml as I wanted to learn it anyway
09:34 askz aaaand also, I have this when I do gluster volume status : https://paste.fedoraproject.org/paste/TAyKYHlqagZbPjI75vjvK15M1UNdIGYhyRLivL9gydE= (the failing node is web3.par)
09:34 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
09:36 jiffin vincenzoml: issue looks to me like glusterd is not resolving the hostname properly
09:37 vincenzoml may very well be, but I don't know how to check; if I run "host" on the same machine I get an IP which translates to the server
09:37 vincenzoml translates, routes, whatever
09:37 jiffin askz: your one node is down, so data is not replicated to that node
09:37 jiffin vincenzoml: what u mean by client disconnects?
09:38 vincenzoml jiffin: I started experimenting this morning as I need a way to keep a filesystem in sync with my laptop, supporting offline operation for a while, meaning that when the laptop is disconnected I can still use the filesystem
09:39 askz jiffin: I can confirm its up...
09:39 vincenzoml I've searched the web and found suggestions for glusterfs, but who knows what they meant. Maybe they meant I should run a replica on the laptop, for example. But anyway I decided to check it for myself.
09:39 askz I have some logs : https://paste.fedoraproject.org/paste/aqCuXBu3FzNIIxP5gnYrOV5M1UNdIGYhyRLivL9gydE=
09:39 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
09:40 askz 0-management: got disconnect from stale rpc on /srv/gluster/home I think that's the problem
09:42 jiffin vincenzoml: with a single brick volume, u will hit those kinds of issues
09:42 jiffin use replicated or dispersed volumes for that instead
09:42 askz isn't the problem here : 0-www-shared-posix: Extended attribute not supported, exiting.
09:43 nishanth joined #gluster
09:43 vincenzoml jiffin: I will, one step at a time I will; do you mean I should add my laptop as a replicated volume so that I can disconnect it temporarily?
09:43 amarts joined #gluster
09:44 vincenzoml jiffin: going back to the issue, if I use only the hostname instead of the full fqdn, volume creation works. I don't know why the fqdn doesn't resolve in gluster, is there anything I can do to understand this?
09:44 jiffin vincenzoml: can u add in /etc/hosts and try that
09:44 jiffin s/that/again
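(A sketch of jiffin's suggestion, with a made-up address: pin the name to the server's LAN address in /etc/hosts so glusterd resolves it locally instead of via the router's public IP.)

    # /etc/hosts on the server (hypothetical address and names)
    192.168.1.10   myserver.ddns.net   myserver
    # then retry the volume create with that name
    gluster volume create gv0 myserver.ddns.net:/home/gluster/brick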
09:46 cloph what is "disconnected operation from clients" by the way?
09:47 vincenzoml "offline operations", it's I edit a file when the machine is not connected to the server
09:48 vincenzoml of course this calls in problems of possible forks and that's why it's very hard to do it, so I thought a cluster file system may provide facilities for that
09:50 vincenzoml now I can't mount my fs from the machine where I'm typing. May be a problem of firewall setup; I get "mount failed, please check the log", question is "what log?"
09:50 Wizek_ joined #gluster
09:50 askz jiffin: any ideas? My node one is started but gluster can't see the PID. It disappears completely if I stop it, and reappears if I start it, but without a PID
09:51 jiffin askz: maybe it is crashing
09:51 Tanner_ joined #gluster
09:51 askz vincenzoml: you should use something like syncthing for this kind of things
09:51 jiffin you can check the brick logs on that machine
09:52 askz https://paste.fedoraproject.org/paste/gdgzVBdc4crjwnwdXdXrI15M1UNdIGYhyRLivL9gydE=
09:52 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
09:52 askz I think this is the last msg causing problem
09:52 askz but the second node has more files than this one and isn't crashing :/
09:54 atinm joined #gluster
09:54 cloph No, offline operations are not supported by gluster
09:55 vincenzoml hmm
09:55 shdeng joined #gluster
09:55 vincenzoml well, anyway I also want to replicate the server to another one, so I will keep studying this. Now I need to understand why I can't connect. I see nothing in the brick log so I suspect firewall may be the problem
09:57 skumar joined #gluster
09:58 cloph if you want to replicate, but don't care if there's a delay, then you could look into geo-replication.
09:58 cloph That has an annoying bug with symlinks though.
09:59 ppai joined #gluster
10:01 rastar joined #gluster
10:02 askz how can I recover which file corresponds to this : gfid=66d193e0-ca6b-44e1-81ba-c27f5214133c ?
10:03 cloph not sure what you mean with recover, but the file in the .glusterfs/66/d1/ directory on your brick is that file.
10:03 cloph don't mess around with the brick directory though (don't manually delete or add files there)
10:03 cloph provide more details on what you mean with "recover"
10:03 askz I mean find the correct file
10:04 askz because I think it's this one that's keeping my brick from starting
10:04 theeboat joined #gluster
10:04 askz sorry for my english ahah
10:04 askz I mean this file is causing my brick to fail on start.
10:04 poornima joined #gluster
10:04 askz [posix.c:5666:_posix_handle_xattr_keyvalue_pair] 0-www-shared-posix: fsetxattr failed on gfid=66d193e0-ca6b-44e1-81ba-c27f5214133c while d
10:07 theeboat Hi, I was looking for some advice on which sort of volume configuration I should go for. Current HW config: 10 servers with 2x 10-core CPUs and 128GB RAM each; each server has two raid6 volumes with ~22 disks per pool. I intend to use 6 of the servers for primary storage and the other four for a backup. What are everybody's thoughts on the volume type I should pick (stripe, distributed, replicated or a mix)? Thanks
10:10 cloph it is hardlinked to the hierarchy on the brick, so ls -i for the 66.... file, then you could use find /brick -inum <the inode number>
10:11 cloph likely there is a better way with some translator layer, but as the volume fails to start/mount, won't help here..
10:12 cloph but I doubt that this really is the error. is that really the only line that sticks out? what other lines are there before/after the one you pasted?
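(The lookup cloph describes, run directly on the brick server; the brick path and gfid are copied from the log purely for illustration.)

    BRICK=/srv/gluster/home
    GFID=66d193e0-ca6b-44e1-81ba-c27f5214133c
    # the gfid entry under .glusterfs is a hardlink to the real file,
    # so both share an inode number
    ls -i "$BRICK/.glusterfs/66/d1/$GFID"
    # search the brick for that inode, skipping .glusterfs itself
    find "$BRICK" -path "$BRICK/.glusterfs" -prune -o -inum <inode-number> -print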
10:12 Tanner_ joined #gluster
10:12 cloph theeboat: if you cannot change the disk-layout (i.e. you continue to use raid6), then use a distributed-replicated.
10:13 cloph Although it's better to have an odd number of peers, and not sure what you mean by using the other four as backup.
10:13 msvbhat joined #gluster
10:14 cloph I mean with gluster and replicated storage you already have redundancy, and with raid6 you have even more redundancy on top.
10:14 cloph (so gluster-replication is only to allow for servers to go down and still be available, not account for disk-failure)
10:15 cloph with raid6 bricks it is kinda overcompensating/you sacrifice performance with that..
10:15 cloph @stripe
10:15 glusterbot cloph: (#1) Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes., or (#2) The stripe translator is deprecated. Consider enabling sharding instead.
10:15 cloph → so the only advice that holds no matter what: you should not use striped volumes :-)
10:17 skoduri joined #gluster
10:20 askz cloph: there it is https://paste.fedoraproject.org/paste/CbJyoyppsAoO~7omCTmXEV5M1UNdIGYhyRLivL9gydE=
10:20 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
10:21 Peppard joined #gluster
10:24 askz it is exiting because of this extended attribute thing right?
10:24 cloph askz: you missed the "Read-only file system" bit in your last paste..
10:25 cloph gluster relies on being able to write to the brick, so check why your brick volume is read-only.
10:25 askz hmm weird.
10:26 askz I have the errors=remount-ro option so apparently it was remounted
10:30 atinm joined #gluster
10:31 theeboat cloph: thanks for the advice. The backup is essentially going to be another cluster at an offsite location with an exact replica of what is on the first volume.
10:31 askz fsck'ing the fs right now. thanks cloph for your wide open eyes
10:33 theeboat Do you know what percentage of space I will have to sacrifice when going with distributed-replicated as opposed to distributed?
10:34 Prasad_ joined #gluster
10:37 sanoj joined #gluster
10:38 skumar joined #gluster
10:41 Karan joined #gluster
10:44 Tanner_ joined #gluster
10:45 Prasad joined #gluster
10:51 cloph theeboat: with replica 2, you need twice the space, with replica 3 you'd save everything 3 times, etc
10:51 cloph so if possible I'd move away from the raid6 and use gluster replication
10:52 cloph (the middle way, replica 2 with an arbiter, stores two full copies of the files plus a third copy holding only the metadata, which doesn't take up much space)
10:53 cloph I don't think it is necessary to go higher than replica 3 (or 2+1 for that matter), but of course depends on your requirements (and whether you count your raid-level duplication/redundancy as well)
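(A rough sketch of the variants being compared; hostnames and brick paths are invented.)

    # distributed-replicated, replica 3: every file stored in full on 3 bricks
    gluster volume create datavol replica 3 \
        srv1:/bricks/b1 srv2:/bricks/b1 srv3:/bricks/b1 \
        srv4:/bricks/b1 srv5:/bricks/b1 srv6:/bricks/b1
    # replica 2 + arbiter: two full data copies plus a metadata-only brick
    gluster volume create datavol2 replica 3 arbiter 1 \
        srv1:/bricks/b2 srv2:/bricks/b2 srv3:/bricks/arbiter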
10:54 amarts joined #gluster
10:57 theeboat cloph: do you think going with distributed and keeping the raid6 offers an adequate amount of redundancy?
10:58 cloph might be in terms of data redundancy, but not the same if e.g. one of the servers needs maintenance and is taken offline.
10:58 cloph with only distributed, you'll no longer have the files available that were only stored on that node's bricks
11:00 cloph what is "adequate" or not depends on what matters most/more to you: Having the data *accessible* at all times/without interruption, or not *losing* any data in case of hardware failures
11:00 theeboat How does the cluster react when you take a node offline in a distributed environment? does the whole volume become unavailable or would the files stored on the server just be unavailable?
11:01 cloph (although imagine a broken raid controller corrupts all your data on one node, with distributed only the files on that node will be gone. Your node catches fire and destroys all disks at once → files gone...)
11:01 cloph files become unavailable
11:02 theeboat that's cleared things up a lot for me. I'm also guessing with a striped/replicated structure the client performance will be better when clients are accessing the same files
11:03 p7mo joined #gluster
11:03 nh2 joined #gluster
11:04 theeboat the content is primarily video files 120GB~ in size. However we are using a 25Gb network with the majority of clients accessing over 1Gb and a small number of servers accessing on 10Gb.
11:04 cloph see the notes from the bot above re stripe
11:04 theeboat Ah sorry, just read that now.
11:07 msvbhat joined #gluster
11:07 cloph and again: if you're not bound to use raid6 on the server, I'd rather use glusters replication and distribute the replicas over different nodes, than store the duplicate files in the raid disks of the very same host
11:07 amarts cloph, you may need to consider the 'shard' volume for large files
11:07 cloph that is also what the bot says :-)
11:08 cloph but shard is not a special volume type, it is a volume option
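(For reference, sharding is switched on per volume via volume options, roughly like this; the volume name and block size are just examples.)

    gluster volume set datavol features.shard on
    gluster volume set datavol features.shard-block-size 512MB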
11:08 theeboat How does the structure work in terms of bandwidth? If a single client connects, will it instantly be able to utilize all the bandwidth available, or is a client given a specific share of bandwidth no matter what the current load is? The reason I ask is that in my initial tests using 5 servers @ 10Gb, one client was able to read at ~250MB/s, but when I started reading from another couple of machines they were seeing similar read
11:08 theeboat speeds and weren't affected by the other clients connecting
11:09 cloph if using fuse mount, it will connect to all servers, if it is nfs, it will only talk to the single server
11:09 theeboat is that still the case when using nfs ganesha?
11:09 cloph gluster itself doesn't do any throttling
11:10 cloph the client will still use a regular nfs mount, and while ganesha (the nfs server) has a better view of the volumes, the client will still pass everything through the single link to ganesha
11:10 Tanner_ joined #gluster
11:11 theeboat ah ok, i thought ganesha did some load balancing using a virtual IP, connecting users to the node with the lowest latency, which in most cases i would assume would be the node with the least amount of load
11:12 cloph yeah, but then it would still only be one node processing the nfs stuff. That virtual IP helps to transparently switch to another one in case the first one disappears/has problems
11:13 cloph but for the clients it is a regular nfs mount, and thus a client only communicates with that single server
11:13 cloph so depends on what that server has in terms of bandwidth/whether that is limiting the transfer.
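(Client-side, the difference cloph describes looks roughly like this; server names are placeholders.)

    # native/FUSE mount: the client opens connections to every brick server,
    # so traffic is spread across the nodes
    mount -t glusterfs server1:/myvol /mnt/myvol
    # NFS mount via ganesha: everything funnels through the one VIP mounted
    mount -t nfs -o vers=4 ganesha-vip:/myvol /mnt/myvol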
11:14 theeboat network bandwidth shouldn't be too much of a worry. there are two 25Gb network interfaces, one for the san network and one specifically for a gluster network
11:15 kkeithley theeboat. When you set up ganesha through the gluster CLI, you get "high availability" (or HA) through the virtual IPs. If a ganesha.nfsd crashes (or the whole node goes down) the virtual IPs move to another host
11:16 kkeithley there's no load-based load balancing.  (nothing anywhere says that, I'm not sure where anyone gets that idea)
11:19 kkeithley NFS clients mount from one of the virtual IPs. They don't care which node the VIP is actually on.
11:21 theeboat @kkeithley: are there any mechanisms available that would provide that sort of load balancing, or is it just HA that's available? I'm also assuming that using samba & ctdb is again HA rather than load balancing?
11:22 ppai joined #gluster
11:22 kkeithley and yes, nfs-ganesha is just like legacy gnfs in that the client mounts from one IP (a VIP) and continues to talk to that VIP. The difference is that the VIPs move when there's a failure.
11:23 kkeithley theeboat: as I said above, it's only HA, no load balancing.
11:24 kkeithley Yes, AFAIK samba+ctdb is only HA as well.
11:24 cloph theeboat: don't quite get that. ganesha itself connects to the bricks/nodes
11:24 kkeithley obnox: ^^^
11:25 kkeithley cloph: correct, ganesha is itself a client of gluster (using gfapi)
11:25 kkeithley hmmm, obnox isn't here
11:25 cloph or otherwise put: If bandwidth is not the problem on the nodes, what would you like that loadbalancing to do?
11:31 theeboat Is SMB the only way to connect windows clients to the volume? Can't seem to find anything online about using the fuse mount
11:32 theeboat The clients that I have which are likely to max out a single 25Gb connection are windows clients
11:32 Tanner_ joined #gluster
11:32 kkeithley there is no FUSE client for Windows. Newer releases of Windows have NFS. I have no idea how well that works
11:33 theeboat i would assume not very well
11:33 kkeithley lol
11:34 cloph ah yeah, Microsoft removed the nfs services from non-enterprise editions :-/
11:34 theeboat $$$$$ is always the case with Microsoft
11:35 panina joined #gluster
11:35 theeboat im guessing the lack of fuse support on windows is also the reason why the client hasn't been ported as well?
11:52 Tanner_ joined #gluster
12:03 kkeithley Dunno. If I had to guess, I'd guess Windows has some kind of interface, but probably nothing like the Linux FUSE interface.  The BSDs and even MacOSX have Linux FUSE kernel modules.
12:04 kkeithley Historically tho' Gluster hasn't had many Windows devs.
12:10 kkeithley Reminder: GlusterFS Community Meeting today at 15:00 UTC in #gluster-meeting
12:15 Tanner_ joined #gluster
12:18 Prasad_ joined #gluster
12:23 [diablo] joined #gluster
12:28 shyam joined #gluster
12:32 Tanner_ joined #gluster
12:34 jiffin joined #gluster
12:37 ppai joined #gluster
12:43 vbellur joined #gluster
12:47 baber joined #gluster
12:51 shyam joined #gluster
12:53 Philambdo joined #gluster
12:55 jiffin joined #gluster
13:11 dominicpg joined #gluster
13:15 kpease joined #gluster
13:16 unclemarc joined #gluster
13:22 kpease joined #gluster
13:25 Tanner_ joined #gluster
13:25 msvbhat joined #gluster
13:27 lkoranda joined #gluster
13:29 ppai joined #gluster
13:34 kpease joined #gluster
13:34 ira joined #gluster
13:41 Tanner_ joined #gluster
13:44 skylar joined #gluster
13:46 nbalacha joined #gluster
13:48 askz in the end I can't repair the fs, it seems broken and remounts read-only every time. My question is: if I stop glusterfs on the failing node, then format the failing drive, then restart, what's gonna happen? Or maybe, what's the solution to entirely re-sync the second node
13:48 askz the failing node*
13:51 cloph if you already have a spare, use replace-brick before.
13:51 askz I don't :/
13:51 cloph as to what other things happen when the node goes down depends on your configuration. You might fall below quorum
13:52 askz at this time it doesn't matter a lot coz it's not live. but scares me a lot for the next days...
13:53 askz I was already thinking of restarting the entire process of creating bricks etc. and rsyncing all the data back....
13:53 askz But if I can save time and not rsync again, that would be cool
13:54 Tanner_ joined #gluster
13:54 cloph if you nuke the brick, then it has to resync (heal) - so if you cannot fix the filesystem, a resync is inevitable. I mean you cannot have the data magically appear on a new filesystem.
13:55 cloph OTOH: if you cannot repair the filesystem  (what is the error you get when you try to ?) then there is not much confidence that the new filesystem on the same hardware will work reliably.
13:55 gyadav joined #gluster
13:55 cloph but if the volume isn't in use, umount the volume wherever it is mounted, and stop the volume.
13:55 scuttle|afk joined #gluster
13:56 cloph Then you can do whatever you want without gluster caring about it :-)
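(If the brick does get rebuilt, the usual route back is to swap the new brick in and let self-heal repopulate it; a sketch using the volume and brick names from askz's pastes purely as examples.)

    # point the volume at the recreated (empty) brick
    gluster volume replace-brick www-shared \
        web3.par:/srv/gluster/home web3.par:/srv/gluster/home-new \
        commit force
    # trigger a full self-heal and watch its progress
    gluster volume heal www-shared full
    gluster volume heal www-shared info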
13:56 kkeithley Reminder: GlusterFS Community Meeting today at 15:00 UTC (one hour from now) in #gluster-meeting
13:57 susant joined #gluster
13:57 askz okay thanks
13:57 askz last question, if I put data on the brick when gluster is off, will it take those files into account and stream them?
13:58 cloph no, manually  adding stuff to the brick is a sure way to create inconsistencies/mess up the gluster volume.
13:59 kkeithley don't ever touch the brick. (don't ever cross the streams)
13:59 askz ^^
14:00 askz thanks
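(In other words, anything that should end up in the volume has to go in through a client mount, never the brick directory; a tiny sketch with an invented mount point.)

    mount -t glusterfs localhost:/www-shared /mnt/www-shared
    rsync -a /backup/data/ /mnt/www-shared/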
14:00 jiffin joined #gluster
14:02 ndevos kkeithley: what about Frogger then?
14:02 cloph who you gonna call…
14:03 kkeithley Different stream crossing
14:04 [o__o] joined #gluster
14:04 rwheeler joined #gluster
14:05 shyam joined #gluster
14:05 askz so I fsck -y the volume then start it then mount it. and then, in the next minute :
14:05 askz https://paste.fedoraproject.org/paste/xcF~2ruSStWuDNMLso8nSF5M1UNdIGYhyRLivL9gydE=
14:05 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
14:06 cloph askz: you should check your host's system logs to see why it does remount the volume read-only
14:06 jiffin joined #gluster
14:10 buvanesh_kumar joined #gluster
14:10 squizzi_ joined #gluster
14:10 askz apparently it's nbd related. got a lot of block nbd2: Attempted send on closed socket
14:10 skoduri joined #gluster
14:11 askz in dmesg. my instance is a scaleway C2
14:11 askz trying to reboot right now to see if it continues
14:13 Tanner_ joined #gluster
14:17 rafi1 joined #gluster
14:26 jdossey joined #gluster
14:28 Tanner_ joined #gluster
14:28 plarsen joined #gluster
14:31 buvanesh_kumar joined #gluster
14:31 baber anyone out there running geo-rep on distributed-dispersed with a high brick count and large number of files ?
14:36 oajs joined #gluster
14:39 kotreshhr left #gluster
14:40 ahino1 joined #gluster
14:43 Tanner_ joined #gluster
14:44 riyas joined #gluster
14:47 askz cloph: it was a provider-related problem... seems to be okay now.
14:47 askz thanks a lot :)
14:50 farhorizon joined #gluster
14:58 nbalacha joined #gluster
14:58 kkeithley GlusterFS Community Meeting starting now #gluster-meeting
14:59 farhorizon joined #gluster
15:08 Asako joined #gluster
15:08 wushudoin joined #gluster
15:08 Asako Hello.  Is it safe to delete the .glusterfs directory?  I'm no longer using gluster but there appears to be several files in this directory.
15:09 wushudoin joined #gluster
15:11 dominicpg joined #gluster
15:14 cornfed78 joined #gluster
15:14 JoeJulian left #gluster
15:14 JoeJulian joined #gluster
15:14 cornfed78 Hi everyone. I'm setting up a Gluster 3.10 server with NFS Ganesha. It's working, but I see this error in the logs: "nfs_in_grace :STATE :EVENT :NFS Server Now NOT IN GRACE" - what does that mean?
15:14 JoeJulian Asako: yes
15:15 R0ok_ joined #gluster
15:15 Tanner_ joined #gluster
15:18 baber joined #gluster
15:24 Tanner_ joined #gluster
15:25 Asako JoeJulian: thanks
15:26 JoeJulian cornfed78: I forget off the top of my head. If you can find the videos from the 2016 gluster developer conference, Kaleb has a presentation on the whole HA process for Ganesha.
15:28 JoeJulian cornfed78: https://www.youtube.com/watch?v=3mof2XerU6Y
15:33 ankitr joined #gluster
15:33 gospod2 joined #gluster
15:35 squizzi_ joined #gluster
15:38 nbalacha joined #gluster
15:42 cornfed78 JoeJulian: Thanks!
15:45 kpease joined #gluster
15:46 baber joined #gluster
15:47 Tanner_ joined #gluster
15:51 ahino joined #gluster
15:51 askz hello again, why is there a size difference between the brick and the mounted volume? https://paste.fedoraproject.org/paste/ghBjuKaqBPCcZlrK9ZEHDl5M1UNdIGYhyRLivL9gydE=
15:51 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
15:51 askz I'm currently healing the web3 node (the one failing)
15:51 askz it's mostly fresh install
15:52 askz and which logs shall I look for errors/report?
15:53 JoeJulian askz: Probably sparse files on web2 that are no longer sparse on web3.
15:56 farhorizon joined #gluster
16:02 askz JoeJulian: will they get deleted automatically?
16:03 JoeJulian askz Do you know what a sparse file is?
16:04 JoeJulian https://en.wikipedia.org/wiki/Sparse_file
16:04 glusterbot Title: Sparse file - Wikipedia (at en.wikipedia.org)
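(A quick way to see the effect JoeJulian means, on any Linux box; rsync without --sparse is one common way sparseness gets lost.)

    truncate -s 1G sparse.img        # 1G apparent size, no data blocks used
    ls -lh sparse.img                # shows 1.0G
    du -h sparse.img                 # shows ~0: nothing actually allocated
    rsync sparse.img expanded.img    # holes get written out as zeros...
    du -h expanded.img               # ...so this now uses the full 1G on disk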
16:08 askz mmmmh. thanks for the clarification
16:09 JoeJulian It always surprises me how few people know about that. :)
16:11 askz in fact I couldn't put a name to it, but I knew the effect
16:12 askz so is that a problem ?
16:12 askz ("sparse files on web2 that are no longer sparse on web3")
16:15 askz I think it's more the reverse, because I first copied those files onto web3 (rsync), and now web2 is healing and the size keeps growing.
16:15 askz (not sure about my first sentence ahah)
16:20 gyadav joined #gluster
16:21 Jules- joined #gluster
16:27 baber joined #gluster
16:29 Gambit15 joined #gluster
16:30 Tanner_ joined #gluster
16:38 mallorn joined #gluster
16:39 Tanner_ joined #gluster
16:39 farhorizon joined #gluster
16:55 shyam joined #gluster
17:12 Tanner_ joined #gluster
17:14 jiffin joined #gluster
17:14 susant left #gluster
17:19 major hurm ..
17:20 major JoeJulian, that email to gluster-devel "stat() returns invalid file size when self healing" sounds like it might be related to git claiming my repo was corrupt during heal
17:23 rafi joined #gluster
17:24 susant joined #gluster
17:30 amarts joined #gluster
17:32 Tanner_ joined #gluster
17:34 baber joined #gluster
17:34 jiffin1 joined #gluster
17:39 shyam joined #gluster
17:48 derjohn_mob joined #gluster
17:49 R0ok_ joined #gluster
17:51 susant joined #gluster
17:54 kpease joined #gluster
18:04 ron-slc joined #gluster
18:07 susant left #gluster
18:08 R0ok_ joined #gluster
18:17 baber joined #gluster
18:18 rastar joined #gluster
18:19 arpu joined #gluster
18:22 jiffin1 joined #gluster
18:30 rafi joined #gluster
18:32 ahino joined #gluster
18:39 Tanner_ joined #gluster
19:05 Tanner_ joined #gluster
19:17 rafi joined #gluster
19:28 farhorizon joined #gluster
19:36 baber joined #gluster
19:53 farhorizon joined #gluster
19:58 farhorizon joined #gluster
20:12 msvbhat joined #gluster
20:20 baber joined #gluster
20:30 Asako left #gluster
20:48 oajs joined #gluster
20:54 jkroon joined #gluster
20:59 Tanner_ joined #gluster
21:01 shyam joined #gluster
21:40 Tanner_ joined #gluster
21:45 john51 joined #gluster
21:50 john51 joined #gluster
22:04 MrAbaddon joined #gluster
22:13 Tanner_ joined #gluster
22:16 musinsky left #gluster
22:17 oajs joined #gluster
22:19 scuttle` joined #gluster
22:21 bfoster joined #gluster
22:32 MrAbaddon joined #gluster
22:35 bfoster joined #gluster
22:39 Tanner_ joined #gluster
22:59 Tanner_ joined #gluster
23:16 victori joined #gluster
23:17 Tanner_ joined #gluster
23:26 Tanner_ joined #gluster
23:33 cliluw joined #gluster
23:47 sysanthrope joined #gluster
23:55 sysanthrope joined #gluster
