
IRC log for #gluster, 2017-04-03


All times shown according to UTC.

Time Nick Message
00:05 masber joined #gluster
01:26 d-fence joined #gluster
01:30 arpu joined #gluster
02:03 derjohn_mob joined #gluster
02:36 susant joined #gluster
02:37 susant left #gluster
02:39 farhorizon joined #gluster
02:49 Humble joined #gluster
02:52 kramdoss_ joined #gluster
03:24 rafi joined #gluster
03:26 prasanth joined #gluster
03:42 purpleidea joined #gluster
03:42 purpleidea joined #gluster
03:44 magrawal joined #gluster
03:51 rafi joined #gluster
04:05 susant joined #gluster
04:06 rastar joined #gluster
04:15 rejy joined #gluster
04:17 dominicpg joined #gluster
04:24 dominicpg joined #gluster
04:28 aravindavk joined #gluster
04:29 riyas joined #gluster
04:34 skumar joined #gluster
04:36 hgowtham joined #gluster
04:40 JoeJulian misc, nigelb: ^^ re: d.g.o certificate
04:42 ankitr joined #gluster
04:47 buvanesh_kumar joined #gluster
04:47 Shu6h3ndu_ joined #gluster
04:49 nbalacha joined #gluster
04:55 susant left #gluster
04:58 karthik_us joined #gluster
04:59 sanoj joined #gluster
05:12 rafi joined #gluster
05:12 kdhananjay joined #gluster
05:16 apandey joined #gluster
05:21 [diablo] joined #gluster
05:26 ndarshan joined #gluster
05:26 percevalbot joined #gluster
05:34 jiffin joined #gluster
05:34 nthomas joined #gluster
05:40 Humble joined #gluster
05:47 IRCFrEAK joined #gluster
05:50 IRCFrEAK left #gluster
05:54 Philambdo joined #gluster
05:55 prasanth joined #gluster
06:07 nigelb JoeJulian: thanks, I'll file a bug to get it fixed.
06:07 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
06:11 level7 joined #gluster
06:24 Karan joined #gluster
06:29 jtux joined #gluster
06:30 jtux left #gluster
06:34 level7 joined #gluster
06:43 jiffin1 joined #gluster
06:48 ivan_rossi joined #gluster
06:49 ivan_rossi left #gluster
06:51 kotreshhr joined #gluster
07:02 riyas joined #gluster
07:16 mbukatov joined #gluster
07:17 rastar joined #gluster
07:19 tru_tru joined #gluster
07:26 Wizek_ joined #gluster
07:35 jiffin1 joined #gluster
07:49 Prasad joined #gluster
07:53 ahino joined #gluster
07:54 riyas joined #gluster
07:56 skoduri joined #gluster
07:58 Jules- joined #gluster
07:59 Jules- joined #gluster
08:07 derjohn_mob joined #gluster
08:12 jkroon joined #gluster
08:16 apandey_ joined #gluster
08:21 apandey__ joined #gluster
08:25 Prasad_ joined #gluster
08:26 panina joined #gluster
08:28 Prasad__ joined #gluster
08:32 ppai joined #gluster
08:32 atinm joined #gluster
08:33 Wizek_ joined #gluster
08:40 sanoj joined #gluster
08:44 shwethahp joined #gluster
08:52 jiffin1 joined #gluster
08:52 level7_ joined #gluster
08:56 cliluw joined #gluster
08:59 derjohn_mob joined #gluster
09:04 aravindavk joined #gluster
09:08 skoduri joined #gluster
09:19 sanoj joined #gluster
09:19 panina joined #gluster
09:27 DV joined #gluster
09:28 bartden joined #gluster
09:32 bartden Hi, we are running a sequence of file operations on files stored on gluster (3.7). We first zip it (read from gluster, output to gluster) and then encrypt it using AES (read from gluster, write to gluster). Occasionally the file is no longer unzippable afterwards … any suggestions on where I need to look? Caching?
09:35 ndevos bartden: what version of 3.7.x is that? And are you aware that 3.7 will not receive any updates anymore?
09:35 ndevos bartden: you could try with disabling write-behind, it is/was fragile in certain use-cases
09:36 bartden hi ndevos 3.7.5-1.el6  (thanks for the reminder :) )
09:38 ndevos bartden: in that case, there are still *many* fixes available for you, maybe you can find something in the release notes of newer 3.7 versions at https://github.com/gluster/glusterfs/tree/release-3.7/doc/release-notes/
09:38 glusterbot Title: glusterfs/doc/release-notes at release-3.7 · gluster/glusterfs · GitHub (at github.com)
09:53 atinm joined #gluster
09:54 bartden ndevos so by disabling performance.flush-behind I disable write-behind, correct?
09:55 ndevos bartden: flush-behind is something else, I think there is a performance.write-behind option too
09:56 bartden ok thx
09:57 ndevos bartden: after you set the option, it would be safest to unmount and mount the volume on the client again
09:57 R0ok_ joined #gluster
09:57 bartden ok, no need in restarting the volume then?
09:57 ndevos no, it only affects the client-side
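(A minimal sketch of what ndevos suggests; the volume name "myvol", server "server1" and mount point /mnt/gluster are illustrative:)

    # on any server, disable the write-behind translator for the volume
    gluster volume set myvol performance.write-behind off
    # on the client, remount so the change takes effect
    umount /mnt/gluster
    mount -t glusterfs server1:/myvol /mnt/gluster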
10:01 MikeLupe joined #gluster
10:18 rideh joined #gluster
10:29 jwd joined #gluster
10:38 riyas joined #gluster
10:43 caitnop joined #gluster
10:46 skoduri joined #gluster
11:06 level7 joined #gluster
11:10 kotreshhr left #gluster
11:21 derjohn_mob joined #gluster
11:41 rafi1 joined #gluster
11:43 skoduri joined #gluster
11:54 riyas joined #gluster
11:54 XpineX joined #gluster
11:57 jiffin joined #gluster
11:59 Jules- joined #gluster
12:05 skoduri joined #gluster
12:06 Jules- joined #gluster
12:12 nh2 joined #gluster
12:13 DV joined #gluster
12:21 jiffin1 joined #gluster
12:22 prasanth joined #gluster
12:26 jiffin joined #gluster
12:33 baber joined #gluster
12:38 kpease joined #gluster
12:39 unclemarc joined #gluster
12:44 PTech joined #gluster
13:06 rafi1 joined #gluster
13:23 shyam joined #gluster
13:25 squizzi joined #gluster
13:31 kkeithley bwerthmann: dunno.  Another fresh install of trusty. Installed 3.7.20, created volume, started, mounted, unmounted, stopped.
13:31 kkeithley updated to 3.8.10 with no issues or crash. started same volume, mounted, umounted, stopped.
13:32 kkeithley not sure what to tell you. :-/
13:37 skylar joined #gluster
13:44 bartden Hi, how can I make sure that glusterfs flushes all written data directly to disk? Because writing a file and then reading it back immediately sometimes gives strange behaviour. I already disabled write-behind … but still the same effect
13:44 Klas glusterfs is not atomic
13:44 mlhess joined #gluster
13:44 Klas unfortunately
13:46 jiffin joined #gluster
13:55 plarsen joined #gluster
13:58 bartden Klas what do you mean by it?
13:59 Klas basically, it's not guaranteed that when the file has been written, the file is actually "finished writing" =P
13:59 Klas so to speak
13:59 Klas http://www.informit.com/articles/article.aspx?p=99706&seqNum=11 seems to be a better explanation than I am capable of
13:59 glusterbot Title: Atomic Operations | Advanced Programming in the UNIX® Environment: UNIX File I/O | InformIT (at www.informit.com)
13:59 bartden ok, so there is no solution for this kind of issue?
13:59 ira joined #gluster
14:00 bartden I know the concept of atomic operations (databases and stuff)
14:01 fsimonce joined #gluster
14:08 Klas I might be wrong and there might be some way, I haven't looked that hard at it
14:09 Klas we noticed the issue and realized it made it unusable for a test-case
14:09 Klas but I wasn't working on that test case, so might've missed something
14:09 mambru joined #gluster
14:10 mambru joined #gluster
14:11 lanwatch joined #gluster
14:12 lanwatch hi, I have a doubt
14:13 lanwatch I lost all trace of the /var/lib/glusterd directories in the nodes
14:13 lanwatch but i still have the data intact in the bricks
14:13 lanwatch can I still recover the data by re-creating the volume?
14:18 icebear_ joined #gluster
14:18 icebear_ hi ...
14:20 lanwatch joined #gluster
14:20 lanwatch hi icebear_
14:21 icebear_ Hope you can help me .... I have the following plan ... 1 node with 5 bricks (each brick on a separate hard disk) ... currently configured with shard on replica count 2 .... 4 disks are online but I can't get the 5th online ... the plan is to ensure all files (or shards of files) sit on a minimum of 2 disks .... is this kind of setup possible?
14:21 Drankis joined #gluster
14:27 icebear_ hm nobody here ?
14:33 susant joined #gluster
14:34 oajs joined #gluster
14:34 susant left #gluster
14:34 lanwatch lots of people, but nobody willing to help :)
14:34 lanwatch it's fair I guess
14:35 JoeJulian lanwatch: Or nobody awake... ;)
14:35 MrAbaddon joined #gluster
14:35 lanwatch I will try myself to do a small testcase with a scratch gluster and see if I can do what I want
14:35 JoeJulian one sec.
14:35 lanwatch then I will report here if I succeed
14:38 JoeJulian lanwatch: if you create your new volume with your bricks unmounted but using the same paths, then after the volume is created but before it's started mount your bricks. You can then either retrieve the old volume-id and update the info files (on every server) or you can get the new volume-id and update the bricks.
14:38 farhorizon joined #gluster
14:39 JoeJulian lanwatch: The volume-id is in /var/lib/glusterd/vols/$volume/info and on the bricks it's in trusted.glusterfs.volume-id on the brick root.
14:39 JoeJulian Just make them match and you can then start your volume.
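(A rough sketch of checking and matching the two IDs JoeJulian describes, assuming a volume "myvol" with a brick at /data/brick1; paths and names are illustrative:)

    # volume-id recorded on the brick root, shown as hex
    getfattr -n trusted.glusterfs.volume-id -e hex /data/brick1
    # volume-id recorded by glusterd
    grep volume-id /var/lib/glusterd/vols/myvol/info
    # to update a brick instead of the info files: the value is the UUID from
    # the info file with the dashes removed, prefixed with 0x
    setfattr -n trusted.glusterfs.volume-id -v 0x<uuid-without-dashes> /data/brick1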
14:41 JoeJulian bartden: glusterfs is POSIX, so you do that using O_DIRECT like you would with any POSIX filesystem.
14:42 lanwatch thanks JoeJulian!!! will try right away
14:44 mallorn We're running gluster 3.10 and have a new problem that started this weekend.   gluster volume heal [volname] info doesn't return anything; it just hangs indefinitely.
14:45 mallorn We see this in the logs when the command is run:
14:45 mallorn [2017-04-03 14:44:09.913296] I [socket.c:2404:socket_event_handler] 0-transport: disconnecting now
14:45 JoeJulian Well that's about as unhelpful as can be. :/
14:46 JoeJulian I think I would start by restarting all my glusterd management daemons.
14:47 arpu what is the bestway to update from 3.10 to 3.10.1 ?
14:48 oajs joined #gluster
14:49 Guest59698 joined #gluster
14:49 shyam joined #gluster
14:49 JoeJulian update one server at a time, wait for self-heals to finish between each one, then update clients.
14:50 JoeJulian @learn update as Update one server at a time, wait for self-heals to finish between each one, then update clients.
14:50 glusterbot JoeJulian: The operation succeeded.
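(A sketch of the rolling-update procedure JoeJulian describes, one server at a time; the package manager, service commands and volume name depend on the setup and are illustrative:)

    # on one server at a time
    systemctl stop glusterd
    yum update glusterfs-server      # or the distro's equivalent
    systemctl start glusterd
    # wait for self-heal to finish before moving to the next server
    gluster volume heal myvol info
    # once all servers are done, update the clients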
14:57 arpu JoeJulian, https://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.10/   like the update to 3.10?
14:57 glusterbot Title: Upgrade to 3.10 - Gluster Docs (at gluster.readthedocs.io)
14:58 JoeJulian arpu: yep
15:00 wushudoin joined #gluster
15:01 wushudoin joined #gluster
15:01 arpu thx
15:04 annettec joined #gluster
15:10 mallorn @JoeJulian, you suggested restarting glusterd.  Will that trigger a self-heal after each restart?  I would test it, except I can't check the healing status afterwards.
15:13 JoeJulian mallorn: My expectation is that would clear up the lock that's preventing you from checking the status.
15:13 JoeJulian Assuming it is a lock.
15:16 bartden JoeJulian, to do this I enable it via the mount option direct-io-mode=enable and set performance.strict-o-direct to on for the volume?
15:17 MrAbaddon joined #gluster
15:17 JoeJulian bartden: I always just set the flag on open()
15:17 bartden and how do i do that?
15:18 shyam joined #gluster
15:21 JoeJulian int open(const char *pathname, int flags); where "flags" includes O_DIRECT.
15:21 JoeJulian http://man7.org/linux/man-pages/man2/open.2.html
15:22 glusterbot Title: open(2) - Linux manual page (at man7.org)
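(A minimal C sketch of what JoeJulian describes: open the file with O_DIRECT so writes bypass the client-side caches. The path is illustrative; note that O_DIRECT requires the buffer, offset and length to be suitably aligned:)

    #define _GNU_SOURCE              /* for O_DIRECT on Linux */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* open with O_DIRECT to bypass client-side caching */
        int fd = open("/mnt/gluster/outfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0)
            return 1;

        /* O_DIRECT I/O needs an aligned buffer and an aligned length */
        void *buf;
        size_t len = 4096;
        if (posix_memalign(&buf, 4096, len) != 0)
            return 1;
        memset(buf, 'x', len);

        if (write(fd, buf, len) != (ssize_t)len)
            return 1;
        fsync(fd);                   /* also flush anything still buffered server-side */
        close(fd);
        free(buf);
        return 0;
    }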
15:29 annettec left #gluster
15:40 vbellur joined #gluster
15:45 sanoj joined #gluster
16:00 mallorn OK, thank you @JoeJulian.  I'm going to wait until our scheduled maintenance window tomorrow to restart glusterd everywhere just in case.
16:08 lanwatch ok JoeJulian, quick update, I managed to do it by just simply passing the force option to the volume create command
16:08 lanwatch it happily took my old bricks and the data is in place, like nothing happened
16:08 lanwatch thanks everyone!
16:09 JoeJulian lanwatch: Oh, cool. Thanks for letting me know, I'll keep that in mind for the next poor soul. :D
16:09 lanwatch :)
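(A sketch of the shortcut lanwatch found: re-create the volume over the existing brick paths and pass "force" to volume create; the volume name, replica count and brick paths are illustrative:)

    gluster volume create myvol replica 2 \
        server1:/data/brick1 server2:/data/brick1 force
    gluster volume start myvol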
16:25 atinm joined #gluster
16:26 kkeithley joined #gluster
16:31 farhorizon joined #gluster
16:33 jkroon_ joined #gluster
16:50 om3 joined #gluster
16:54 om2 joined #gluster
16:56 jiffin joined #gluster
17:02 tom[] joined #gluster
17:04 kkeithley joined #gluster
17:09 rafi joined #gluster
17:10 tom[] joined #gluster
17:11 Karan joined #gluster
17:12 farhorizon joined #gluster
17:13 Humble joined #gluster
17:26 saduser joined #gluster
17:26 saduser Hello... I have ganesha saying "Health status is unhealthy". What is that correlated with? How can I fix it?
17:30 farhorizon joined #gluster
17:42 squizzi_ joined #gluster
17:46 rastar joined #gluster
18:05 R0ok_ joined #gluster
18:13 msvbhat joined #gluster
18:14 squizzi_ joined #gluster
18:33 skylar joined #gluster
18:36 Humble joined #gluster
18:41 john51_ joined #gluster
19:04 mallorn I have a disperse-distributed (5 x (2+1)) filesystem.  What happens if I remove a file from the three servers it's hosted on?  Will that effectively remove the file, or will that confuse gluster?  Standard filesystem commands hang.
19:05 mallorn Is it preferred to do this with 'gfrm' so the calls are made through the API?
19:26 derjohn_mob joined #gluster
19:28 mallorn Got it.  I had to set features.locks-revocation-secs and my hanging issues stopped, as did the ability to do a 'gluster volume heal [volname] info'.
19:41 mallorn I mean inability to do that command.
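(For reference, the option mallorn mentions is set per volume; the value is in seconds and the number here is illustrative:)

    gluster volume set myvol features.locks-revocation-secs 60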
19:58 squizzi_ joined #gluster
20:10 baber joined #gluster
20:18 R0ok_ joined #gluster
20:44 jkroon joined #gluster
21:09 vbellur joined #gluster
21:10 vbellur joined #gluster
21:10 vbellur1 joined #gluster
21:11 vbellur joined #gluster
21:12 vbellur joined #gluster
21:12 vbellur joined #gluster
21:30 vbellur joined #gluster
21:30 vbellur joined #gluster
21:31 R0ok_ joined #gluster
21:31 vbellur joined #gluster
21:33 vbellur joined #gluster
21:34 vbellur joined #gluster
21:35 vbellur joined #gluster
21:36 vbellur joined #gluster
21:36 vbellur joined #gluster
21:37 vbellur joined #gluster
21:39 kwt joined #gluster
21:40 vbellur joined #gluster
21:40 vbellur joined #gluster
21:41 vbellur joined #gluster
21:43 q1x joined #gluster
21:47 krink joined #gluster
21:48 shyam joined #gluster
21:50 vbellur joined #gluster
21:58 kwt left #gluster
21:58 kwt joined #gluster
22:05 kwt hey all, i'm having a pretty strange issue with gluster when trying to peer probe. probe is successful originally, but then after installing kubernetes, the probe times out without any packets leaving any interface
22:13 kwt from cli.log:
22:13 kwt [2017-04-03 20:50:51.978893] T [cli.c:273:cli_rpc_notify] 0-glusterfs: got RPC_CLNT_CONNECT [2017-04-03 20:50:51.978921] T [cli-quotad-client.c:94:cli_quotad_notify] 0-glusterfs: got RPC_CLNT_CONNECT [2017-04-03 20:50:51.978933] I [socket.c:2355:socket_event_handler] 0-transport: disconnecting now [2017-04-03 20:50:51.979110] T [rpc-clnt.c:1404:rpc_clnt_record] 0-glusterfs: Auth Info: pid: 0, uid: 0, gid: 0, owner: [2017-04-03 20:50:51
22:15 derjohn_mob joined #gluster
22:16 plarsen joined #gluster
22:19 cliluw joined #gluster
22:21 plarsen joined #gluster
22:29 plarsen joined #gluster
22:33 plarsen joined #gluster
22:39 plarsen joined #gluster
23:04 Gambit15 joined #gluster
23:17 Garogat joined #gluster
23:18 Garogat I have two nodes running gluster and the fs is mounted locally on both of them. Which ports do I have to open?
23:18 Garogat Is it just 24007-24008?
23:20 vbellur joined #gluster
23:22 q1x joined #gluster
23:22 mlg9000 joined #gluster
23:33 Guest93 joined #gluster
23:39 plarsen joined #gluster
