
IRC log for #gluster, 2016-10-18


All times shown according to UTC.

Time Nick Message
00:02 Alghost joined #gluster
00:21 luizcpg joined #gluster
00:25 kramdoss_ joined #gluster
00:31 Alghost joined #gluster
00:45 harish_ joined #gluster
00:56 skoduri joined #gluster
01:21 k4n0 joined #gluster
01:29 shdeng joined #gluster
01:39 LiftedKilt joined #gluster
01:47 f0rpaxe joined #gluster
01:49 scubacuda_ joined #gluster
01:59 f0rpaxe joined #gluster
02:27 Javezim joined #gluster
02:28 haomaiwang joined #gluster
02:30 johnmilton joined #gluster
02:50 a2batic joined #gluster
02:59 Alghost joined #gluster
03:04 k4n0 joined #gluster
03:13 magrawal joined #gluster
03:14 Lee1092 joined #gluster
03:39 nbalacha joined #gluster
03:46 Alghost joined #gluster
03:48 hackman joined #gluster
03:49 RameshN joined #gluster
03:52 kramdoss_ joined #gluster
03:52 hgowtham joined #gluster
04:03 itisravi joined #gluster
04:19 atinm joined #gluster
04:30 shubhendu joined #gluster
04:33 jiffin joined #gluster
04:42 ankitraj joined #gluster
04:42 sanoj joined #gluster
04:43 Alghost_ joined #gluster
04:46 plarsen joined #gluster
04:49 Alghost joined #gluster
04:49 a2batic joined #gluster
04:51 rafi joined #gluster
05:01 ndarshan joined #gluster
05:05 karthik_us joined #gluster
05:06 ppai joined #gluster
05:08 ashiq joined #gluster
05:10 kotreshhr joined #gluster
05:12 kramdoss_ joined #gluster
05:15 Muthu_ joined #gluster
05:29 kotreshhr joined #gluster
05:30 anrao_ joined #gluster
05:36 aravindavk joined #gluster
05:43 anrao_ joined #gluster
05:48 [diablo] joined #gluster
05:48 anrao joined #gluster
05:48 kdhananjay joined #gluster
05:49 devyani7_ joined #gluster
05:51 devyani7 joined #gluster
05:52 karnan joined #gluster
05:59 karnan_ joined #gluster
06:01 Bhaskarakiran joined #gluster
06:01 satya4ever joined #gluster
06:04 itisravi joined #gluster
06:06 jtux joined #gluster
06:06 hchiramm joined #gluster
06:07 mhulsman joined #gluster
06:10 mhulsman1 joined #gluster
06:13 BitByteNybble110 joined #gluster
06:16 msvbhat joined #gluster
06:32 Muthu_ joined #gluster
06:38 level7 joined #gluster
06:39 hchiramm joined #gluster
06:41 jns_ joined #gluster
06:42 msvbhat joined #gluster
06:48 apandey joined #gluster
06:48 atrius joined #gluster
06:51 Debloper joined #gluster
06:53 derjohn_mob joined #gluster
06:53 ivan_rossi joined #gluster
06:57 hchiramm joined #gluster
07:04 Alghost_ joined #gluster
07:13 sac joined #gluster
07:16 cvstealt1 joined #gluster
07:17 jri joined #gluster
07:17 [o__o] joined #gluster
07:19 PatNarciso_ joined #gluster
07:20 side_control joined #gluster
07:22 unforgiven512 joined #gluster
07:23 Saravanakmr joined #gluster
07:32 fsimonce joined #gluster
07:33 kxseven joined #gluster
07:34 elastix joined #gluster
07:34 devyani7 joined #gluster
07:35 anrao_ joined #gluster
07:36 mhulsman joined #gluster
07:36 a2batic joined #gluster
07:43 hchiramm_ joined #gluster
07:43 derjohn_mob joined #gluster
07:53 atinm joined #gluster
08:03 jtux joined #gluster
08:04 Alghost joined #gluster
08:05 ankitraj joined #gluster
08:05 natarej_ joined #gluster
08:06 cloph_away joined #gluster
08:07 ankitraj joined #gluster
08:07 cloph_away joined #gluster
08:10 cloph_away joined #gluster
08:11 rafi2 joined #gluster
08:12 jri_ joined #gluster
08:18 jkroon joined #gluster
08:24 Slashman joined #gluster
08:25 harish_ joined #gluster
08:27 Alghost joined #gluster
08:39 mhulsman joined #gluster
08:45 PaulCuzner joined #gluster
08:49 bluenemo joined #gluster
09:01 anrao nigelb: around ?
09:02 derjohn_mob joined #gluster
09:08 Alghost_ joined #gluster
09:11 jri joined #gluster
09:20 deniszh joined #gluster
09:31 nohitall joined #gluster
09:31 nohitall am I right to assume that glusterfs is just an overlay on top of the underlying filesystem and the server just reads the metadata from .glusterfs on the brick?
09:32 nohitall if I abandon glusterfs, it is safe to just use the brick and delete the .glusterfs directory, correct?
09:37 ndevos nohitall: yes, as long as you do not use striped (which is deprecated) or disperse volumes, or have sharding enabled
09:37 ndevos nohitall: the .glusterfs directory contains some meta-data, and files also get some additional extended attributes
09:38 post-factum nohitall, and in case you have distributed volume, you will get files on different bricks
09:39 kblin joined #gluster
09:39 kblin hi folks
09:42 nohitall post-factum: no, it was a replica 2 volume
09:42 nohitall so should be fine to use
09:42 nohitall unfortunately glusterfs is not working for our usecase so have no choice
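A minimal simulation of what ndevos describes above: user data lives as plain files on the brick, while .glusterfs holds only metadata, so removing that directory leaves the data intact. This sketch uses a throwaway path under /tmp (an assumption; real bricks live on their own filesystems, and this should only be done on a brick that has been retired from every volume):

```shell
# Simulate a retired brick under /tmp (path is an assumption for the demo)
BRICK=/tmp/demo-brick
mkdir -p "$BRICK/.glusterfs/00/aa"
echo "payload" > "$BRICK/data.txt"
# Abandoning GlusterFS: delete only the metadata directory, the data stays
rm -rf "$BRICK/.glusterfs"
ls "$BRICK"
# -> data.txt
```

On a real brick you would also find GlusterFS extended attributes on each file (e.g. inspect them with `getfattr -d -m . -e hex <file>` and strip `trusted.gfid` with `setfattr -x` if desired); they are harmless to regular reads.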
09:43 anrao_ joined #gluster
09:50 kblin I've got a glusterfs cluster that I'd like to be able to access over the internet from another site. What's my best option to securely do this?
09:52 ppai joined #gluster
09:58 Gnomethrower joined #gluster
10:09 derjohn_mob joined #gluster
10:11 Alghost joined #gluster
10:12 nohitall kblin: maybe webmin
10:12 ndevos kblin: I'd go with a VPN
10:12 nohitall from a physical site or a website?
10:13 nohitall ndevos: ssh would work too
10:13 kramdoss_ joined #gluster
10:13 nohitall kblin: explain in more detail what you want
10:14 ndevos nohitall: well, depends... mounting with sshfs might work, but I don't know how reliable that is
10:14 nohitall I think he wants vpn too yea,
10:16 ndevos a vpn would be the easiest, keep all the storage servers on private IPs, but still routable through the VPN, and it allows for encrypting all protocols
10:16 kblin nohitall: I've got some number-crunchy systems at two sites. One of them currently has all my data, and one of them is ~ 1000 km away and idles
10:16 kblin I'm using a glusterfs shared filesystem and a job queue to distribute compute jobs, and I'd like to hook the currently unused site up
10:17 nohitall you want to integrate the 1000km far away site into the glusterfs cluster?
10:18 ndevos kblin: what kind of access to the storage is that? read-once + write-once, big files?
10:18 kblin ndevos: big and small files, but basically it's read once, compute locally with local reads/writes, and then write back all the generated files
10:18 ndevos kblin: and what about latency between the two sites?
10:19 nohitall georeplication over that distance will pretty much cause huge performance loss
10:19 kblin haven't checked the latency yet, but a compute job takes a couple of days each, so I'm not too worried if writing takes me 30 seconds longer
10:20 ndevos kblin: local compute with temporary storage on local disks? that sounds like it should work just fine with a fuse or nfs mount
10:20 nohitall glusterfs also supports ssl, so a vpn wouldn't be required
10:21 ndevos nohitall: ssl is tricky to configure correctly, setting up a vpn is probably easier
10:21 kblin nohitall: I'd like to avoid SSL overhead for local access to the glusterfs, I'm not sure that'd work
10:21 nohitall ndevos: true
10:21 nohitall actually I said something wrong, it's not geo-replication you want
10:22 bfoster joined #gluster
10:22 kblin yeah, I know. geo-replication has read-only replicas
10:22 ndevos kblin: I'd configure access through autofs/automount on the remote site, probably with NFS so that writes are done once over the WAN with local replication/distribution on the site that has the storage servers
10:24 ndevos kblin: if you use NFS, you can theoretically setup encryption with Kerberos, if you use NFS-Ganesha - but a VPN will most likely be easier
10:24 nohitall yea lot of tutorials for openvpn for this kind of setup
10:26 decay joined #gluster
10:28 kblin I'm not sure I can get OpenVPN through the corporate firewall, but I guess I can use the corporate VPN instead :)
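ndevos's autofs-over-NFS suggestion above could look roughly like the fragment below. The hostnames, paths, and timeout are assumptions; NFS version 3 is assumed because GlusterFS's built-in NFS server speaks NFSv3. Verify against your own setup before use:

```
# /etc/auto.master -- one added line mapping /remote to an indirect map:
/remote  /etc/auto.gluster  --timeout=600

# /etc/auto.gluster -- mounts storage1.example.com:/gv0 on demand at /remote/data:
data  -fstype=nfs,vers=3,nolock  storage1.example.com:/gv0
```

With the VPN in place, writes from the remote site then cross the WAN once, and the storage-side servers handle replication locally.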
10:29 bfoster joined #gluster
10:36 mhulsman joined #gluster
10:38 msvbhat joined #gluster
10:49 hchiramm joined #gluster
10:55 johnmilton joined #gluster
10:57 mhulsman1 joined #gluster
11:08 msvbhat joined #gluster
11:13 buvanesh_kumar joined #gluster
11:26 atinm joined #gluster
11:29 side_control how do you guys update your gluster servers? each time i've done it (updating one, waiting for it to come back online before updating the other), i always get a few files that just won't heal afterwards, and it takes a long time
11:33 B21956 joined #gluster
11:34 side_control ugh, actually heal failed
11:36 Gnomethrower joined #gluster
11:41 mhulsman joined #gluster
11:41 B21956 joined #gluster
11:43 side_control should it take days to heal certain files?
11:46 itisravi side_control: Ideally, it shouldn't. Are these huge files?
11:47 side_control itisravi: yea, 256G VM images
11:47 ppai joined #gluster
11:48 itisravi side_control: ok, does gluster volume heal volname info show them as 'possibly undergoing heal' ?
11:48 side_control itisravi: yes
11:48 anrao_ joined #gluster
11:48 itisravi side_control: hmm, then it is healing... you could perhaps change the self-heal algorithm to full and see if it becomes faster.
11:49 side_control itisravi: gluster volume heal $volname full ?
11:49 itisravi side_control: gluster volume set volname cluster.data-self-heal-algorithm full
11:50 itisravi side_control: For vm store use-cases, it is better to enable sharding on the volume.
11:51 side_control itisravi: yea, i always run into this problem each time i update gluster
11:51 kramdoss_ joined #gluster
11:55 ashiq joined #gluster
11:57 side_control itisravi: ok, enabled sharding, changed it to full, hit gluster volume heal $volname one more time and im just going to let it percolate
11:57 kkeithle Community Bug Triage in three minutes in #gluster-meeting
12:08 itisravi side_control: ugh! I should have been clearer. sharding only works on newly created files.
12:10 itisravi side_control: If you want to make it work for an existing file, copy the files out to someplace else, delete the file from the volume and copy it back into the volume so that the file gets sharded.
12:10 itisravi you might want to do this when you can afford to shut down your VMs.
12:16 riyas joined #gluster
12:30 skoduri joined #gluster
12:35 ashiq joined #gluster
12:42 side_control itisravi: understood
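itisravi's suggestions above, collected as a command sketch. The volume name "gv0" and the 64MB shard size are assumptions; option names match the 3.7/3.8-era CLI, so check `gluster volume set help` on your version before applying:

```shell
# Copy whole files during heal instead of computing diffs
gluster volume set gv0 cluster.data-self-heal-algorithm full
# Enable sharding for VM-store workloads (only newly created files get sharded)
gluster volume set gv0 features.shard on
gluster volume set gv0 features.shard-block-size 64MB
# Kick off a heal and watch the pending entries
gluster volume heal gv0
gluster volume heal gv0 info
```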
12:49 rafi joined #gluster
12:50 unclemarc joined #gluster
12:59 atinm joined #gluster
13:01 hagarth joined #gluster
13:02 rwheeler joined #gluster
13:07 rastar joined #gluster
13:08 anrao_ joined #gluster
13:12 mhulsman1 joined #gluster
13:12 ndevos oh, its turned off?
13:12 ndevos s/off/on/
13:12 glusterbot What ndevos meant to say was: oh, its turned on?
13:13 ndevos nigelb: ^ uh, nope, bug 1170966 is still alive
13:13 glusterbot Bug https://bugzilla.redhat.com:​443/show_bug.cgi?id=1170966 medium, low, ---, joe, CLOSED WORKSFORME, glusterbot has a broken regex s/substituion/replacing/ command
13:15 ndevos but, on the other hand, it seems to work now
13:15 ndevos maybe it is because there was little chatter in here?
13:15 ndevos s/alive/dead/
13:15 glusterbot What ndevos meant to say was: nigelb: ^ uh, nope, bug 1170966 is still dead
13:16 ndevos I guess 'still' isnt really correct here
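glusterbot's s/// replay above is ordinary first-match regex substitution; the same edit with sed:

```shell
# sed applies the same first-match substitution glusterbot performs on the quoted line
echo "oh, its turned off?" | sed 's/off/on/'
# -> oh, its turned on?
```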
13:18 Sebbo2 joined #gluster
13:19 tom[] joined #gluster
13:20 robb_nl joined #gluster
13:23 robb_nl left #gluster
13:24 skoduri joined #gluster
13:29 omasek joined #gluster
13:29 tom[] joined #gluster
13:31 shyam joined #gluster
13:32 Saravanakmr joined #gluster
13:39 side_control when 'possibly undergoing heal', is it safe to remove the file and readd it?
13:40 nigelb ndevos: all good? :)
13:42 anrao_ joined #gluster
13:44 arc0 joined #gluster
13:45 skoduri joined #gluster
13:52 skylar joined #gluster
13:57 squizzi joined #gluster
13:58 msvbhat joined #gluster
14:07 nigelb ndevos: I thought the bug was that substitution returned an error.
14:07 nigelb It doesn't anymore.
14:07 nigelb So why did you re-open the bug?
14:11 krk joined #gluster
14:12 anrao_ joined #gluster
14:12 mhulsman joined #gluster
14:18 ankitraj joined #gluster
14:18 skoduri joined #gluster
14:26 skoduri joined #gluster
14:26 ankitraj joined #gluster
14:29 kkeithley joined #gluster
14:30 hagarth joined #gluster
14:31 jtux joined #gluster
14:37 TvL2386 joined #gluster
14:38 plarsen joined #gluster
14:39 deniszh1 joined #gluster
14:41 jvandewege joined #gluster
14:43 luizcpg joined #gluster
14:43 shaunm joined #gluster
14:43 farhorizon joined #gluster
14:44 anrao_ joined #gluster
14:51 Peanut Does glusterd emit any kind of dbus or systemd signal that can be used to determine that it's not only running, but actually available for mounting? I keep having the hardest time getting a proper systemd setup where I want a machine to mount one of its own volumes.
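One commonly used arrangement for Peanut's self-mounting problem is to order the mount after glusterd via fstab mount options rather than waiting for a signal. The volume name and mountpoint below are assumptions, and note the caveat that glusterd being active does not guarantee every brick process is already serving:

```
# /etc/fstab sketch -- mount a locally hosted volume after glusterd starts:
localhost:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,x-systemd.requires=glusterd.service,x-systemd.automount  0 0
```

`x-systemd.automount` defers the actual mount until first access, which gives the brick processes extra time to come up.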
14:52 satya4ever joined #gluster
14:54 Philambdo1 joined #gluster
14:55 twisted` joined #gluster
14:59 kpease joined #gluster
14:59 ndevos nigelb: I reopened it because you mentioned that JoeJulian disabled the replacement function
14:59 ndevos s/replacement/substitution/
15:00 glusterbot What ndevos meant to say was: nigelb: I reopened it because you mentioned that JoeJulian disabled the substitution function
15:00 ndevos nigelb: and yeah, I think it works fine now, so the original problem reported has been fixed
15:00 ndevos s/fixed$/ (with an update?)/
15:01 glusterbot What ndevos meant to say was: nigelb: and yeah, I think it works fine now, so the original problem reported has been  (with an update?)
15:01 ndevos and so much for my regex/replacing :-/
15:01 Lee1092 joined #gluster
15:08 JoeJulian heh
15:09 JoeJulian glusterbot auto-updates nightly - because I'm lazy.
15:11 twisted` joined #gluster
15:12 wushudoin joined #gluster
15:12 kotreshhr joined #gluster
15:15 shyam joined #gluster
15:19 kdhananjay joined #gluster
15:20 jiffin joined #gluster
15:30 aravindavk joined #gluster
15:30 Gambit15 joined #gluster
15:31 shubhendu joined #gluster
15:41 Philambdo2 joined #gluster
15:47 anrao_ joined #gluster
15:55 arc0 joined #gluster
16:01 jiffin joined #gluster
16:02 hagarth joined #gluster
16:03 guhcampos joined #gluster
16:15 JoeJulian file a bug
16:15 glusterbot https://bugzilla.redhat.com/en​ter_bug.cgi?product=GlusterFS
16:19 hchiramm joined #gluster
16:20 shyam joined #gluster
16:28 Muthu|afk joined #gluster
16:29 derjohn_mob joined #gluster
16:58 farhorizon joined #gluster
17:08 kotreshhr left #gluster
17:11 kpease joined #gluster
17:11 derjohn_mob joined #gluster
17:26 Philambdo1 joined #gluster
17:31 mss joined #gluster
17:32 mss joined #gluster
17:33 cogsu joined #gluster
17:33 dataio_ joined #gluster
17:33 snila_ joined #gluster
17:33 ashka joined #gluster
17:33 ashka joined #gluster
17:33 amye joined #gluster
17:33 Jules- joined #gluster
17:33 nohitall joined #gluster
17:33 suliba joined #gluster
17:33 Peppard joined #gluster
17:34 JPaul joined #gluster
17:34 Trefex joined #gluster
17:35 nathwill joined #gluster
17:35 sadbox joined #gluster
17:36 hchiramm joined #gluster
17:37 samikshan joined #gluster
17:38 wiza joined #gluster
17:38 ackjewt joined #gluster
17:41 telius joined #gluster
17:41 RustyB joined #gluster
17:42 steveeJ joined #gluster
17:45 fyxim joined #gluster
17:48 primusinterpares joined #gluster
17:51 Muthu|afk joined #gluster
18:00 Philambdo joined #gluster
18:05 omasek joined #gluster
18:12 cliluw joined #gluster
18:39 wushudoin| joined #gluster
18:40 mhulsman joined #gluster
18:46 luizcpg joined #gluster
18:50 luizcpg joined #gluster
18:51 jobewan joined #gluster
18:52 farhorizon joined #gluster
19:09 skoduri joined #gluster
19:20 hchiramm joined #gluster
19:28 bluenemo joined #gluster
19:29 Philambdo1 joined #gluster
19:29 msvbhat joined #gluster
19:53 twisted` joined #gluster
19:58 arpu joined #gluster
20:21 PaulCuzner left #gluster
20:29 derjohn_mob joined #gluster
20:42 ndk_ joined #gluster
20:42 portante joined #gluster
21:00 farhorizon joined #gluster
21:00 farhorizon joined #gluster
21:18 Larsen_ joined #gluster
21:19 snehring joined #gluster
21:19 colm joined #gluster
21:22 LiftedKilt joined #gluster
21:22 nathwill joined #gluster
21:25 nathwill joined #gluster
21:51 nathwill joined #gluster
21:56 om joined #gluster
21:56 luizcpg joined #gluster
22:29 om joined #gluster
22:40 jeremyh joined #gluster
22:41 a2 joined #gluster
22:52 ttkg joined #gluster
23:09 om Sorry, JoeJulian did you get my last message?
23:10 om I might need to scroll up the irc to see if you replied...
23:11 om looks like the history is too short.... :(
23:14 om was about version 3.7.14 - when rebuilding a gluster server node, the gluster client does not connect to the other healthy nodes, and forcefully connects to the newly replaced node that is rebuilding the fs data... therefore only showing partial data that is replicating until replication is done. The very opposite of the expected behaviour. I had to turn to nfs-kernel-server, a single point of failure, disabling glusterfs and the glusterfs-provided nfs as well...
23:14 om did you guys test this in the lab, see this bug and log a bug for dev patching?
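Whatever the root cause om describes, the heal state of a rebuilt node can at least be verified from any server before clients are pointed back at it. "gv0" is an assumed volume name; this is a sanity check, not a fix for the reported behaviour:

```shell
# All bricks should show as online
gluster volume status gv0
# Pending heal entries should drop to zero on every brick
gluster volume heal gv0 info
```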
23:21 hagarth joined #gluster
23:41 shyam joined #gluster
