
IRC log for #gluster, 2015-07-22


All times shown according to UTC.

Time Nick Message
00:09 jbrooks_ joined #gluster
00:33 calavera joined #gluster
00:34 topshare joined #gluster
00:38 topshare joined #gluster
00:44 kovshenin joined #gluster
01:07 mpietersen joined #gluster
01:10 topshare joined #gluster
01:11 _dist joined #gluster
01:25 Lee1092 joined #gluster
01:28 mpieters_ joined #gluster
01:49 harish joined #gluster
01:58 calavera joined #gluster
02:03 julim joined #gluster
02:13 an joined #gluster
02:23 DV joined #gluster
02:24 nangthang joined #gluster
02:27 prg3 joined #gluster
02:27 an joined #gluster
02:34 kovshenin joined #gluster
02:44 sadbox joined #gluster
03:07 bharata-rao joined #gluster
03:09 nangthang joined #gluster
03:12 vmallika joined #gluster
03:13 overclk joined #gluster
03:14 TheSeven joined #gluster
03:27 an joined #gluster
03:40 shubhendu joined #gluster
03:54 atinm joined #gluster
03:56 dusmant joined #gluster
04:02 itisravi joined #gluster
04:04 RameshN joined #gluster
04:04 kanagaraj joined #gluster
04:19 gildub joined #gluster
04:29 glusterbot News from newglusterbugs: [Bug 1245425] IFS is not set back after used as "[" in log_newer function of include.rc <https://bugzilla.redhat.com/show_bug.cgi?id=1245425>
04:30 jwd joined #gluster
04:40 yazhini joined #gluster
04:44 vimal joined #gluster
04:47 schandra joined #gluster
04:48 meghanam joined #gluster
04:50 ramteid joined #gluster
04:59 hchiramm joined #gluster
05:05 sakshi joined #gluster
05:05 pppp joined #gluster
05:06 hgowtham joined #gluster
05:08 Manikandan joined #gluster
05:08 ashiq joined #gluster
05:11 ppai joined #gluster
05:16 spandit joined #gluster
05:17 jiffin joined #gluster
05:17 deepakcs joined #gluster
05:17 rafi joined #gluster
05:18 kovshenin joined #gluster
05:18 ndarshan joined #gluster
05:19 vmallika joined #gluster
05:21 rafi1 joined #gluster
05:24 gem joined #gluster
05:27 Philambdo joined #gluster
05:30 Bhaskarakiran joined #gluster
05:36 an joined #gluster
05:38 dusmant joined #gluster
05:43 surabhi joined #gluster
05:48 an joined #gluster
05:52 kdhananjay joined #gluster
05:52 maveric_amitc_ joined #gluster
06:00 gem joined #gluster
06:02 Manikandan joined #gluster
06:05 jtux joined #gluster
06:08 raghu joined #gluster
06:10 soumya_ joined #gluster
06:10 doekia joined #gluster
06:14 topshare joined #gluster
06:15 nbalacha joined #gluster
06:15 jwd joined #gluster
06:16 jwd joined #gluster
06:17 RameshN joined #gluster
06:23 jiffin1 joined #gluster
06:25 meghanam joined #gluster
06:26 soumya joined #gluster
06:30 gem joined #gluster
06:31 karnan joined #gluster
06:31 Manikandan joined #gluster
06:34 an joined #gluster
06:39 ppai joined #gluster
06:47 jwaibel joined #gluster
06:49 kshlm joined #gluster
06:52 atalur joined #gluster
07:04 anil_ joined #gluster
07:07 atinm joined #gluster
07:13 [Enrico] joined #gluster
07:13 deepakcs joined #gluster
07:16 an joined #gluster
07:17 jiffin1 joined #gluster
07:21 soumya_ joined #gluster
07:26 rtalur joined #gluster
07:28 paraenggu joined #gluster
07:29 paraenggu left #gluster
07:29 Philambdo joined #gluster
07:32 soumya joined #gluster
07:37 atinm joined #gluster
07:41 rtalur joined #gluster
07:47 topshare joined #gluster
07:47 Pupeno joined #gluster
07:49 Pupeno joined #gluster
07:58 anti[Enrico] joined #gluster
07:59 ppai joined #gluster
07:59 ctria joined #gluster
08:17 aravindavk joined #gluster
08:21 hchiramm_ joined #gluster
08:38 mihaillive joined #gluster
08:39 FlOp joined #gluster
08:40 mihaillive left #gluster
08:40 FlOp Hi all!
08:40 Guest78908 I’ve one question:
08:41 Guest78908 Can Gluster 3.6 be considered as stable and used in production?
08:43 meghanam joined #gluster
08:45 Guest78908 anyone here? ;-)
08:48 jwaibel morning
08:48 rtalur joined #gluster
08:49 shubhendu joined #gluster
08:49 dusmant joined #gluster
08:50 ndevos Guest78908: sure, many users use 3.6 in production
08:51 Guest78908 Okay, that's great :-)
08:52 Guest78908 Can you tell me what happens in a replicated volume with two bricks when they’re not the same size?
08:52 Guest78908 I assume the usable volume would be the smaller one, right?
08:57 an joined #gluster
09:05 rtalur joined #gluster
09:08 Saravana_ joined #gluster
09:08 ppai joined #gluster
09:08 Slashman joined #gluster
09:10 ndevos Guest78908: not sure what happens, I think calculations would get wonky, things are supposed to work when one brick is offline
09:10 Manikandan joined #gluster
09:10 ndevos Guest78908: when one brick is offline, it does not know the smallest size, and would return the size of the online brick
09:10 arcolife joined #gluster
09:11 ndevos Guest78908: it's really advised to use bricks of the same size in a replica set, I would use partitions or LVM to do that
09:13 Guest78908 but in a distributed replicated volume I can use bricks of different sizes as long as the replicas are of the same size?
09:13 schandra joined #gluster
09:16 ndevos yes, you can; it's not advised, but it should not run into technical issues
09:17 Guest78908 So maybe you can tell me whether the plan we have would actually work:
09:19 Guest78908 We have some workstations (about 50) which all have some space (between 150GB and 2TB) left on their local hard drives.
09:19 Guest78908 Currently these partitions are exported via NFS (one export per workstation).
09:20 [Enrico] joined #gluster
09:21 schandra hchiramm_: there
09:21 Guest78908 The idea is to cluster the HDDs (and, for data protection, have some replication), so I thought of a distributed replicated volume with replication factor 2.
09:22 Guest78908 Would this work as desired if I add the bricks in pairs of equal size?
09:23 shubhendu joined #gluster
09:23 Guest78908 And is it a (security) problem if the users are working locally on the gluster servers (of course using the volume via gluster mount)?
09:25 dusmant joined #gluster
09:26 ndevos it should work, it's just worrisome that you say "workstations" because those are not servers that are always on
09:27 Guest78908 oh, our workstations should run 24/7, because they’re used for numerical calculations
09:27 Manikandan joined #gluster
09:27 ndevos and, if the users run their work on local disks now, they would then start using network storage, and that is probably slower
09:28 sakshi joined #gluster
09:30 Guest78908 the short times for reboots every once in a while shouldn’t be a problem for gluster, as long as not both servers of a pair of replicated bricks are offline at the same time, right?
09:31 Guest78908 actually they’re also using the network right now because they sometimes write on exported partitions on other machines
09:32 ndevos a reboot might introduce a delay when a client tries to connect to a brick that is offline
09:33 Guest78908 delay until the connection switches to the replicated brick? Or until the brick is online again?
09:35 ndevos delay until the ,,(ping-timeout) accepts that the disconnected brick is gone
09:35 glusterbot The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
09:38 Guest78908 Ah, okay. Sounds logical. :-)
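
(For reference: ping-timeout is a per-volume option. A minimal sketch of checking and changing it, assuming a hypothetical volume named myvol; as JoeJulian notes later in this log, changing it is generally not recommended.)

    gluster volume info myvol                          # reconfigured options show the current value
    gluster volume set myvol network.ping-timeout 42   # default is 42 seconds; applies to existing client connections
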
09:40 Guest78908 So there is no big mistake in my plans that will go horribly wrong?
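
(A minimal sketch of the setup Guest78908 describes: a distribute-replicate volume with replica 2, where each consecutive pair of bricks forms one replica set, so bricks are added in equal-sized pairs. Hostnames, volume name and brick paths are hypothetical.)

    gluster volume create wsvol replica 2 \
        ws01:/export/brick1 ws02:/export/brick1 \
        ws03:/export/brick2 ws04:/export/brick2
    gluster volume start wsvol
    # grow later by adding more equal-sized pairs
    gluster volume add-brick wsvol ws05:/export/brick3 ws06:/export/brick3
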
09:43 paraenggu joined #gluster
09:46 DV__ joined #gluster
09:49 DV__ joined #gluster
09:50 hchiramm_ schandra, yes
09:51 hchiramm_ got into a couple of meetings in between
09:58 Guest78908 afk
09:59 ppai joined #gluster
10:00 glusterbot News from newglusterbugs: [Bug 1065651] gluster vol quota dist-vol list is not displaying quota informatio. <https://bugzilla.redhat.com/show_bug.cgi?id=1065651>
10:05 an joined #gluster
10:17 maveric_amitc_ joined #gluster
10:22 hchiramm_ rtalur++ kshlm++ ppai++ thanks !
10:22 glusterbot hchiramm_: rtalur's karma is now 1
10:22 glusterbot hchiramm_: kshlm's karma is now 2
10:22 glusterbot hchiramm_: ppai's karma is now 2
10:28 shubhendu_ joined #gluster
10:29 shubhendu joined #gluster
10:29 cleong joined #gluster
10:32 itisravi joined #gluster
10:33 glusterbot News from resolvedglusterbugs: [Bug 1215173] Disperse volume: rebalance and quotad crashed <https://bugzilla.redhat.com/show_bug.cgi?id=1215173>
10:33 glusterbot News from resolvedglusterbugs: [Bug 1217386] Crash in dht_getxattr_cbk <https://bugzilla.redhat.com/show_bug.cgi?id=1217386>
10:33 sakshi joined #gluster
10:36 shubhendu joined #gluster
10:46 harish_ joined #gluster
10:52 ira joined #gluster
10:54 dusmant joined #gluster
10:55 Guest78908 @ndevos: thanks for your answers!
10:55 Guest78908 helped a lot :-)
10:55 Guest78908 bye
11:00 ekuric joined #gluster
11:04 gildub joined #gluster
11:06 LebedevRI joined #gluster
11:07 dusmant joined #gluster
11:11 ppai joined #gluster
11:21 nsoffer joined #gluster
11:27 aaronott joined #gluster
11:29 rafi joined #gluster
11:30 julim joined #gluster
11:35 shyam joined #gluster
11:37 kotreshhr joined #gluster
11:41 rafi joined #gluster
11:43 gildub joined #gluster
11:48 ramky joined #gluster
11:49 rafi joined #gluster
11:49 overclk joined #gluster
11:54 klaxa|work joined #gluster
11:54 natarej joined #gluster
12:07 shyam The gluster weekly community meeting has started at #gluster-meeting.
12:07 atinm REMINDER: gluster community meeting has just begun at #gluster-meeting
12:08 B21956 joined #gluster
12:09 pdrakeweb joined #gluster
12:14 jtux joined #gluster
12:17 ashka joined #gluster
12:18 enzob joined #gluster
12:19 unclemarc joined #gluster
12:20 ashka hi, I have an issue setting up a gluster volume on a private network. both the server and the client have 2 interfaces (one public, one private): server is 1.2.3.4, client is 5.6.7.8 on the public; server is 10.0.0.1, and client is 10.0.0.2 on the private. I have auth.allow 10.* on the volume. However when I try to mount from the client with mount.glusterfs 10.0.0.1:/my-volume it gives an authentication error. For it to work I have to auth.allow 5.6.7.8 and
12:20 ashka mount.glusterfs 5.6.7.8:/my-volume; if I do that then it will mount, open a link 1.2.3.4:49152 -> 5.6.7.8:1017 and 10.0.0.1:24007 -> 10.0.0.2:1017. How can I get it to mount exclusively on the private network?
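
(A sketch of the configuration ashka describes, with the volume name and addresses taken from the question; whether this alone keeps traffic on the private network is exactly what is being asked, so this is illustration rather than a fix.)

    gluster volume set my-volume auth.allow 10.*
    # on the client, mounting via the server's private address
    mount -t glusterfs 10.0.0.1:/my-volume /mnt/my-volume
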
12:23 tanuck joined #gluster
12:27 dusmant joined #gluster
12:29 chirino joined #gluster
12:32 enzob joined #gluster
12:35 kotreshhr left #gluster
12:43 kxseven joined #gluster
12:49 dusmant joined #gluster
12:52 kdhananjay joined #gluster
12:53 an joined #gluster
12:55 jcastillo joined #gluster
13:06 samikshan joined #gluster
13:08 overclk joined #gluster
13:08 anti[Enrico] joined #gluster
13:13 mpietersen joined #gluster
13:14 paraenggu left #gluster
13:16 uebera|| joined #gluster
13:22 soumya joined #gluster
13:24 overclk joined #gluster
13:29 gildub joined #gluster
13:30 georgeh-LT2 joined #gluster
13:32 edong23 joined #gluster
13:32 tquinn joined #gluster
13:38 yoavz joined #gluster
13:41 spcmastertim joined #gluster
13:44 Twistedgrim joined #gluster
13:49 overclk joined #gluster
13:51 rwheeler joined #gluster
13:53 shubhendu joined #gluster
13:54 dgandhi joined #gluster
13:54 RameshN joined #gluster
13:56 sankarshan_ joined #gluster
14:00 overclk joined #gluster
14:01 sahina joined #gluster
14:01 glusterbot News from newglusterbugs: [Bug 1245667] epoll: replace epoll_create() with epoll_create1() <https://bugzilla.redhat.com/show_bug.cgi?id=1245667>
14:02 shyam joined #gluster
14:18 cyberswat joined #gluster
14:25 jcastill1 joined #gluster
14:25 haomaiwa_ joined #gluster
14:27 RameshN joined #gluster
14:27 wushudoin| joined #gluster
14:28 nbalacha joined #gluster
14:29 cyberswat I had to replace a brick in a replicated volume and am checking it for consistency.  All of the non .glusterfs files seem to be in place. Is it unusual to have a replicated volume with a much higher number of files in the .glusterfs folder of brick1 than on brick2?  Is this something I should be concerned about?
14:32 jmarley joined #gluster
14:34 bennyturns joined #gluster
14:37 overclk joined #gluster
14:40 overclk_ joined #gluster
14:42 jcastillo joined #gluster
14:43 ajames-41678 joined #gluster
14:45 jwd joined #gluster
14:47 lpabon joined #gluster
14:49 rwheeler joined #gluster
14:58 [Enrico] joined #gluster
15:01 haomaiwa_ joined #gluster
15:05 hamiller joined #gluster
15:10 skoduri joined #gluster
15:16 obnox joined #gluster
15:17 bennyturns joined #gluster
15:19 haomaiwa_ joined #gluster
15:19 vmallika joined #gluster
15:20 bennyturns joined #gluster
15:23 javi404 joined #gluster
15:24 shyam joined #gluster
15:25 cholcombe joined #gluster
15:30 siel joined #gluster
15:31 jmarley joined #gluster
15:33 virusuy joined #gluster
15:36 skoduri joined #gluster
15:38 _maserati joined #gluster
15:41 overclk joined #gluster
15:46 chirino joined #gluster
15:48 an joined #gluster
15:49 jwd joined #gluster
15:51 jwaibel joined #gluster
15:54 jdossey joined #gluster
15:56 chirino joined #gluster
15:57 mkzero joined #gluster
15:58 JoeJulian cyberswat: It doesn't surprise me. Any file that's created and later removed could create directories under .glusterfs. When the file is removed, the gfid hardlink is removed, but the directory structure wouldn't be.
15:58 hchiramm_ joined #gluster
15:58 JoeJulian The self-heal will only create the .glusterfs directories that are currently needed for the gfid hardlinks.
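
(A rough way to see what JoeJulian describes on each brick: regular files under .glusterfs are the gfid hardlinks for live data, while the two-level directory shells can accumulate over time on an older brick. Brick paths are placeholders.)

    find /export/brick1/.glusterfs -type f | wc -l   # gfid hardlinks, roughly equal across replicas
    find /export/brick1/.glusterfs -type d | wc -l   # directory shells, often higher on the older brick
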
15:58 CyrilPeponnet joined #gluster
16:00 deepakcs joined #gluster
16:01 cabillman joined #gluster
16:04 _dist joined #gluster
16:06 elico joined #gluster
16:07 rafi joined #gluster
16:11 cyberswat JoeJulian: That makes sense and helps me understand what I'm seeing.
16:11 cyberswat thanks
16:14 plarsen joined #gluster
16:17 DV joined #gluster
16:17 XpineX joined #gluster
16:21 maveric_amitc_ joined #gluster
16:21 ramky joined #gluster
16:32 _maserati stupid question, sorry, how do i check which gluster server version im running?
16:33 JoeJulian _maserati: gluster --version
16:34 calavera joined #gluster
16:34 _maserati >.<
16:34 JoeJulian :D
16:36 jiffin joined #gluster
16:36 chirino joined #gluster
16:38 an joined #gluster
16:41 overclk joined #gluster
16:43 _maserati JoeJulian, I'm not sure why I didn't just try that. I just remember a few weeks ago I had to do the same thing and I ended up finding a damn grep script tearing apart config files to find the version number..... stupid
16:43 _maserati Must have been a government employee
16:49 shyam joined #gluster
17:00 overclk joined #gluster
17:01 pppp joined #gluster
17:01 gem joined #gluster
17:06 jmarley joined #gluster
17:10 overclk joined #gluster
17:12 jcastill1 joined #gluster
17:15 cholcombe joined #gluster
17:17 jcastillo joined #gluster
17:21 JoeJulian lol
17:26 _Bryan_ joined #gluster
17:28 swebb joined #gluster
17:36 an joined #gluster
17:37 _dist JoeJulian: I was wondering if changing the networking timeout for a volume only affects clients after they make a new connection, or if it works on existing open files
17:37 Rapture joined #gluster
17:38 _dist referring to your article about ext4 RO
17:41 JonathanS joined #gluster
17:52 Peppard joined #gluster
17:53 btspce joined #gluster
18:02 glusterbot News from newglusterbugs: [Bug 1245775] fuse-bridge.c fuse_err_cbk error messages in client log <https://bugzilla.redhat.com/show_bug.cgi?id=1245775>
18:11 calisto joined #gluster
18:13 ghenry joined #gluster
18:25 vimal joined #gluster
18:30 _maserati Where has Romeor been? He usually lights this channel up
18:31 _maserati probably too much of that Argentinian wine
18:32 jmarley joined #gluster
18:33 _maserati JoeJulian, can you think of any reason why the end-point node of a geo-sync setup is holding three times as much data as my actual replicated nodes?
18:39 rotbeard joined #gluster
18:49 wushudoin| joined #gluster
18:53 mpietersen joined #gluster
18:54 wushudoin| joined #gluster
18:56 RedW joined #gluster
18:58 RedW joined #gluster
19:08 JoeJulian _dist: ping-timeout changes all existing connections. Changing it is not my recommendation (I should probably edit that blog post to make that more clear).
19:09 JoeJulian _maserati: sparseness?
19:09 _maserati JoeJulian, none of the files we store are sparse
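
(One way to rule JoeJulian's sparseness theory in or out is to compare allocated size with apparent size on both ends; the path is a placeholder.)

    du -sh /export/brick1                    # space actually allocated on disk
    du -sh --apparent-size /export/brick1    # size the files claim to be
    # a large gap on one side but not the other points at sparse files being expanded
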
19:09 _dist JoeJulian: the implementation of gluster we have on debian appears not to finish the connection properly at service stop time. I'm not sure why, but I coincidentally have a machine that cannot ever be reset or it breaks entirely (yes, I know, insane; that's being fixed). However, in the meantime any brick that I take down intentionally for maint causes the 42 second wait
19:10 JoeJulian _dist: make debian stop stopping the network. I have no idea why they do that.
19:10 jcastill1 joined #gluster
19:11 _dist JoeJulian: are you aware of a specific way I can take gluster down on debian that will not cause this problem? Is debian taking the network down before gluster can send something like a tcp-fin?
19:15 _dist https://dpaste.de/vKnd this is the stop in /etc/init.d on my installation. clients don't appear to drop their connection after glusterfs-server is stopped. However, shutting down the machine, they don't like that! :)
19:16 jcastillo joined #gluster
19:43 cyberswat joined #gluster
19:44 JoeJulian _dist: correct, debian appears to be taking the network down before fin.
19:44 JoeJulian If you stop glusterfs-server before shutting down, everyone should be happy.
19:44 JoeJulian ERmmm
19:44 JoeJulian no
19:44 JoeJulian Just pkill -f gluster
19:45 JoeJulian Stopping glusterfs-server only stops glusterd (as it should be).
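
(A minimal sketch of the pre-shutdown sequence JoeJulian suggests, run on the node that is about to go down.)

    service glusterfs-server stop   # only stops glusterd, the management daemon
    pkill -f gluster                # also kills glusterfsd brick processes and glusterfs clients
    # with the processes gone before the network drops, peers see a clean TCP close
    # instead of waiting out the 42-second ping-timeout
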
19:50 _dist JoeJulian: thanks, I'll give that a try next time. My assumption that service glusterfs-server stop would work was likely the problem
19:51 _dist so really, /etc/init.d/glusterfs-server should be fixed in debian
19:55 JoeJulian I disagree
19:56 JoeJulian Why is debian the only distro that stops the network before killing apps?
19:57 Romeor _maserati, yeah, that's why i've got +v :D i'm on my vacation atm :) so can't really light anything up :( Helping my parents, replacing electrical cables and systems at their house 300 km from my home :D
19:57 JoeJulian ... and he thinks that's a valid excuse...
19:57 _maserati ^
19:57 Romeor :D:D
19:58 _maserati i'm actually not too well versed in IRC, how does one +v?
19:59 Romeor actually i've just beg it from JoeJulian
19:59 Romeor begged*
20:01 mpietersen joined #gluster
20:04 _dist JoeJulian: no idea, I never expected it would but it explains my pain.
20:06 Romeor _dist, what debian version is used?
20:10 Romeor ah, i'm on my shitter now (sry for that, but that's the only time i have here :D) and going out.. so i'll write some thoughts... if you've got debian 8, then it has changed its init system. maybe the packages are still being built for wheezy?
20:11 Romeor i'm using deb7, but i always stop glusterd manually before restarting the system, so can't help much
20:11 Romeor even more.. first, i stop volumes manually
20:11 Romeor then gluster and then restart
20:12 Romeor you never know how stuff really works if you don't read a lot of man pages, and even then something sometimes won't work as expected.. so i prefer to save my time :D
20:13 _dist Romeor: 7 right now, upgrade is coming soon. I'm going to confirm later this week if the problem I'm seeing is caused by what joejulian and I suspect
20:20 calisto joined #gluster
20:25 Romeor _dist, it might not be debian's problem but rather the guy who makes the .debs.
20:26 JoeJulian I blame all of debian.
20:26 Romeor .deb are made by volunteers
20:26 JoeJulian I know... I still blame them all.
20:26 Romeor and there is a missing step after upgrading from 3.5 to 3.6
20:26 _dist well sure, but it's from the debian repo, so it sort of is, unless you agree with JoeJulian that it should terminate the network nearly last. I mean, no one drops the hard drives before the FS
20:26 Romeor only on .deb
20:27 JoeJulian It should not terminate the network.
20:27 JoeJulian There's no need.
20:30 gildub joined #gluster
20:31 victori joined #gluster
20:51 chirino_m joined #gluster
20:56 badone joined #gluster
21:07 Slashman joined #gluster
21:14 shyam joined #gluster
21:16 victori joined #gluster
21:32 shyam joined #gluster
21:42 nsoffer joined #gluster
22:05 jbautista- joined #gluster
22:10 jbautista- joined #gluster
22:17 tquinn joined #gluster
22:17 spcmastertim joined #gluster
22:20 mator joined #gluster
22:29 shyam joined #gluster
22:50 cleong joined #gluster
22:53 rcampbel3 joined #gluster
22:53 rcampbel3 Hi, doing upgrade... glusterfs stopped, glusterd still running. On ubuntu 14.04... what's the "proper" way to stop and restart glusterd?
22:56 rcampbel3 I don't have an /etc/init.d/glusterfs-server right now. I see skeleton ones from the package. Do I just kill glusterd and does 'gluster volume start volname' start all the required daemons?
22:57 edwardm61 joined #gluster
22:58 JoeJulian rcampbel3: you can just pkill -f gluster
22:59 rcampbel3 thanks. @joejulian, and interestingly enough, gluster-3.7 created /etc/init.d/glusterfs-server
22:59 JoeJulian bummer.. no systemd huh?
23:03 rcampbel3 systemd wasn't default in ubuntu until 15.04
23:13 jcastill1 joined #gluster
23:18 jcastillo joined #gluster
23:21 elico left #gluster
23:33 cyberswat joined #gluster
23:35 gildub joined #gluster
23:47 ajames-41678 joined #gluster
