
IRC log for #gluster, 2017-03-23


All times shown according to UTC.

Time Nick Message
00:00 MrAbaddon joined #gluster
00:18 kramdoss_ joined #gluster
00:21 plarsen joined #gluster
00:23 MrAbaddon joined #gluster
00:34 Ned joined #gluster
00:35 Ned Hi Colleagues - any gluster masters here to ask some questions?
00:35 Ned LateBirds=
00:46 Humble joined #gluster
00:48 dayne JoeJulian: ohh, i think your suggestion will work - just need to nail down if gluster uses a consistent set of ports for its NFS activities.
00:50 MrAbaddon joined #gluster
01:04 shdeng joined #gluster
01:18 rastar joined #gluster
01:18 vbellur joined #gluster
01:29 masber joined #gluster
01:39 shdeng joined #gluster
01:45 raghu joined #gluster
01:45 shdeng joined #gluster
01:46 gem joined #gluster
01:53 masber joined #gluster
02:13 prasanth joined #gluster
02:14 rastar joined #gluster
02:26 vinurs joined #gluster
02:33 Kins joined #gluster
03:02 nthomas joined #gluster
03:03 kramdoss_ joined #gluster
03:12 rastar joined #gluster
03:22 magrawal joined #gluster
03:32 sbulage joined #gluster
03:34 blu_ joined #gluster
03:37 devyani7 joined #gluster
03:39 jkroon joined #gluster
03:48 masber joined #gluster
04:03 itisravi joined #gluster
04:07 ashiq joined #gluster
04:08 dominicpg joined #gluster
04:08 gyadav joined #gluster
04:13 msvbhat joined #gluster
04:14 niknakpaddywak joined #gluster
04:20 joshin left #gluster
04:34 jiffin joined #gluster
04:35 skumar joined #gluster
04:44 Shu6h3ndu joined #gluster
04:44 apandey joined #gluster
04:46 karthik_us joined #gluster
04:48 Vaelatern joined #gluster
04:49 buvanesh_kumar joined #gluster
05:03 ppai joined #gluster
05:10 susant joined #gluster
05:15 skoduri joined #gluster
05:18 ashiq joined #gluster
05:28 susant joined #gluster
05:29 ndarshan joined #gluster
05:44 Saravanakmr joined #gluster
05:44 armyriad joined #gluster
05:44 riyas joined #gluster
05:46 Humble joined #gluster
05:48 rastar joined #gluster
05:51 sanoj joined #gluster
05:54 skumar_ joined #gluster
05:58 buvanesh_kumar joined #gluster
05:58 arif-ali joined #gluster
05:58 [diablo] joined #gluster
05:59 Prasad joined #gluster
06:08 Wizek joined #gluster
06:09 Bardack_ joined #gluster
06:09 jerrcs_ joined #gluster
06:10 lkoranda_ joined #gluster
06:12 Plam joined #gluster
06:13 Champi joined #gluster
06:13 xMopxShell joined #gluster
06:13 karthik_us joined #gluster
06:13 kdhananjay joined #gluster
06:13 poornima_ joined #gluster
06:14 ahino joined #gluster
06:14 nthomas joined #gluster
06:14 rossdm joined #gluster
06:14 rossdm joined #gluster
06:15 hgowtham joined #gluster
06:15 john51 joined #gluster
06:22 masber joined #gluster
06:26 sona joined #gluster
06:27 apandey_ joined #gluster
06:38 ankitr joined #gluster
06:42 Karan joined #gluster
06:45 RameshN joined #gluster
06:48 ppai joined #gluster
06:51 Philambdo joined #gluster
06:52 apandey_ joined #gluster
07:05 squeakyneb joined #gluster
07:11 poornima_ joined #gluster
07:11 buvanesh_kumar joined #gluster
07:13 mhulsman joined #gluster
07:13 alvinstarr joined #gluster
07:16 mhulsman joined #gluster
07:23 jtux joined #gluster
07:24 sbulage joined #gluster
07:27 msvbhat joined #gluster
07:27 Anarka hey :) i have 4 servers (1,2,3,4) with 2 volumes (a and b): 1a 2ab 3ab 4b. Servers 1, 2 and 3 have volume a, and servers 2, 3 and 4 have volume b. The plan is that if servers 3 and 4 go down, volume a should still be up. Should i use cluster.server-quorum-ratio: 51% in this case?
07:34 jiffin Anarka: volume a should be fine, but volume b will be switched to read-only
07:35 jiffin AFAIK by default cluster.server-quorum-ratio is set to 51%
07:35 Anarka jiffin: ok thanks, just making sure :)
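The quorum arithmetic behind jiffin's answer can be sketched in POSIX shell (the numbers below are illustrative, not gluster's actual implementation): with server-quorum-ratio at 51%, quorum holds only while the active peers make up at least that percentage of all peers.

```shell
#!/bin/sh
# Illustrative sketch of the server-quorum check: quorum is met when
# active peers reach the configured percentage of all peers.
quorum_met() {
    active=$1; total=$2; ratio=$3   # ratio in percent, e.g. 51
    [ $(( active * 100 )) -ge $(( total * ratio )) ]
}

quorum_met 3 4 51 && echo "3 of 4 up: quorum met"    # 75% >= 51%
quorum_met 2 4 51 || echo "2 of 4 up: quorum lost"   # 50% <  51%
```

Note the second case: in Anarka's 4-server cluster, losing servers 3 and 4 leaves exactly 50% of peers up, which does not reach a 51% ratio.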
07:37 ppai joined #gluster
07:41 jkroon joined #gluster
07:42 ashiq joined #gluster
07:45 msvbhat joined #gluster
07:58 ankitr joined #gluster
08:00 ivan_rossi joined #gluster
08:00 ivan_rossi left #gluster
08:03 ankitr joined #gluster
08:09 apandey_ joined #gluster
08:25 msvbhat joined #gluster
08:30 mbukatov joined #gluster
08:30 fsimonce joined #gluster
09:19 wellr00t3d joined #gluster
09:27 flying joined #gluster
09:32 wellr00t3d joined #gluster
09:36 wellr00t3d joined #gluster
09:38 ashiq joined #gluster
09:38 ppai joined #gluster
09:48 social joined #gluster
09:59 skoduri joined #gluster
10:13 yalu hi, I have a small glusterfs setup with 2 replicated bricks, and servers are mounted with host1:/share /mountpoint -o backupvolfile-server=host2 ; still when I reboot host1 it takes a considerable time for the files to become available again
10:17 MrAbaddon joined #gluster
10:24 loadtheacc joined #gluster
10:29 ghenry joined #gluster
10:29 ghenry joined #gluster
10:31 jtux joined #gluster
10:35 sahina joined #gluster
10:37 loadtheacc joined #gluster
10:42 kettlewell joined #gluster
10:47 loadtheacc joined #gluster
10:48 skoduri joined #gluster
11:07 msvbhat joined #gluster
11:10 Karan joined #gluster
11:13 Drankis joined #gluster
11:16 msvbhat joined #gluster
11:19 atinm joined #gluster
11:24 pdrakeweb joined #gluster
11:33 OtaKAR joined #gluster
11:33 masber joined #gluster
11:33 jiffin joined #gluster
11:35 Philambdo joined #gluster
11:36 susant left #gluster
11:38 ashiq joined #gluster
12:20 kpease joined #gluster
12:26 baber joined #gluster
12:26 mhulsman left #gluster
12:27 jwd joined #gluster
12:28 kpease joined #gluster
12:45 Karan joined #gluster
12:47 Karan joined #gluster
12:51 Karan joined #gluster
12:51 vbellur joined #gluster
12:53 vbellur joined #gluster
12:53 vbellur1 joined #gluster
12:54 vbellur joined #gluster
12:55 vbellur joined #gluster
12:56 vbellur1 joined #gluster
12:57 msvbhat joined #gluster
13:00 vbellur joined #gluster
13:08 ashiq joined #gluster
13:10 Seth_Karlo joined #gluster
13:10 armyriad joined #gluster
13:14 Wizek joined #gluster
13:17 Netforce01 joined #gluster
13:27 Karan joined #gluster
13:30 jiffin1 joined #gluster
13:31 Karan joined #gluster
13:38 ashiq joined #gluster
13:42 skylar joined #gluster
13:50 sahina joined #gluster
13:54 hybrid512 joined #gluster
14:09 plarsen joined #gluster
14:13 rastar joined #gluster
14:18 shirwa joined #gluster
14:20 koma joined #gluster
14:20 shirwa Hi All, I would like to migrate the underlying brick disks to newer disks. Can someone please advise me on the best brick disk options?
14:21 shirwa *Can someone please advise how to migrate brick disks to newer disks?
14:23 vbellur1 joined #gluster
14:23 vbellur1 joined #gluster
14:24 vbellur1 joined #gluster
14:25 vbellur joined #gluster
14:25 vbellur joined #gluster
14:26 vbellur1 joined #gluster
14:26 vbellur joined #gluster
14:39 raghu joined #gluster
14:40 Philambdo joined #gluster
14:41 Guest78794 hmm, is there a way to disable autogeneration of Samba shares? it's not very secure and I still need to change a lot of things manually
14:43 gyadav joined #gluster
14:43 ankitr joined #gluster
14:46 farhorizon joined #gluster
14:47 vbellur joined #gluster
14:48 Guest78794 ok it's option user.smb, but it's not in the list...
14:52 bennyturns joined #gluster
14:53 bennyturns Hey all!  The RH gluster 3.2 launch event is coming up in about an hour - https://tinyurl.com/lmyfp8j
14:53 glusterbot Title: Introducing Red Hat Gluster Storage 3.2 A Red Hat launch event Registration (at tinyurl.com)
14:53 bennyturns I'll be there answering questions so if anyone has any questions on RHGS 3.2 or gluster / gluster performance join and shoot me your questions over chat
14:54 * bennyturns hopes to see some of y'all there!
14:55 * bennyturns knows it's a RH thing but it will cover a lot of what went into the latest / recent upstream releases
15:01 armyriad joined #gluster
15:16 baber joined #gluster
15:23 raghu joined #gluster
15:26 skumar_ joined #gluster
15:36 rastar joined #gluster
15:40 msvbhat joined #gluster
15:47 vbellur joined #gluster
15:53 vbellur joined #gluster
15:55 jbrooks joined #gluster
15:55 jiffin joined #gluster
16:05 baber joined #gluster
16:05 gem joined #gluster
16:15 rastar joined #gluster
16:18 susant joined #gluster
16:18 vbellur joined #gluster
16:24 Gambit15 joined #gluster
16:24 vbellur joined #gluster
16:24 vbellur joined #gluster
16:25 vbellur joined #gluster
16:26 vbellur joined #gluster
16:26 vbellur joined #gluster
16:27 vbellur joined #gluster
16:39 susant joined #gluster
16:42 Seth_Karlo joined #gluster
16:46 percevalbot joined #gluster
16:47 jiffin1 joined #gluster
16:52 jiffin1 joined #gluster
17:00 jiffin joined #gluster
17:00 mallorn Confirmed that with 3.10 the host referenced for the fuse mount can't go down, or all mounting hosts lose their connections.  We mount controller:nova on /var/lib/nova/instances, but if glusterd is stopped on the system named 'controller' then the filesystem unmounts on every client.
17:01 bennyturns mallorn, there is a backupvol mount option
17:01 bennyturns mallorn, when you set that it connects to the other nodes when the mounted node goes down
17:03 bennyturns Mounting Options
17:03 bennyturns You can specify the following options when using the mount -t glusterfs command. Note that you need to separate all options with commas.
17:03 bennyturns backupvolfile-server=server name - name of the backup volfile server to mount the client. If this option is added while mounting fuse client, when the first volfile server fails, then the server specified in backupvolfile-server option is used as volfile server to mount the client.
17:03 bennyturns mallorn, ^^^
17:04 bennyturns you can have multiple hostnames there for multiple backup vol systems
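What bennyturns describes can be sketched as a mount invocation (hostnames and volume name here are hypothetical):

```shell
# Mount a gluster volume via FUSE, falling back to backup volfile servers
# if the primary (server1) is unreachable at mount time.
mount -t glusterfs server1:/myvol /mnt/gluster \
    -o backupvolfile-server=server2:server3

# Equivalent /etc/fstab entry:
# server1:/myvol /mnt/gluster glusterfs defaults,_netdev,backupvolfile-server=server2:server3 0 0
```

Note this only covers fetching the volfile at mount time; once mounted, the client talks to the bricks directly.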
17:05 AlanH joined #gluster
17:05 AlanH hi all, maybe a stupid newbie question, but if I apply a patch-version update to glusterfs via apt-get, will it cause downtime for the currently running server?
17:18 mallorn Reporting it for major.  Looks like there's now a bug for it:
17:18 mallorn https://bugzilla.redhat.com/show_bug.cgi?id=1434617
17:18 glusterbot Bug 1434617: unspecified, unspecified, ---, bugs, CLOSED DUPLICATE, mounts fail to remain connected if the mount server is brought down
17:20 mallorn backupvolfile-server does seem to mitigate it, though.
17:25 jiffin joined #gluster
17:26 major yah .. think Julian opened up that yesterday
17:27 major better to open it so it can get tracked and validated
17:27 major is good stuff .. I was busy distracting myself w/ gitrack :P
17:33 mallorn Thank you!
17:34 msvbhat joined #gluster
17:34 jbrooks joined #gluster
17:43 mallorn Now to figure out why I get weird DNS lookups.
17:44 JoeJulian mallorn: Please weigh in on but 1434412 if you have an opinion.
17:44 JoeJulian s/but/bug
17:44 JoeJulian bug 1434412
17:44 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1434412 urgent, unspecified, ---, bugs, NEW , Brick Multiplexing:Volume gets unmounted when glusterd is restarted
17:53 shaunm joined #gluster
17:54 mallorn Removing /var/lib/glusterd and recreating it removed my DNS problems.  I can't find anything in the original /var/lib/glusterd that would cause those problems, but it's gone now.  (I had the system doing DNS lookups on these "hostnames" (sanitized)):
17:54 mallorn /var/run/glusterd.socket.example.edu /var/run/glusterd.socket.os.example.edu /var/run/glusterd.socket
18:08 vbellur joined #gluster
18:09 farhorizon joined #gluster
18:09 vbellur joined #gluster
18:10 vbellur joined #gluster
18:11 vbellur joined #gluster
18:12 vbellur joined #gluster
18:13 vbellur joined #gluster
18:14 vbellur joined #gluster
18:14 Wizek joined #gluster
18:15 vbellur joined #gluster
18:17 vbellur joined #gluster
18:18 vbellur joined #gluster
18:18 vbellur joined #gluster
18:20 vbellur joined #gluster
18:20 vbellur joined #gluster
18:22 vbellur joined #gluster
18:22 theron joined #gluster
18:22 vbellur joined #gluster
18:23 vbellur joined #gluster
18:25 vbellur joined #gluster
18:27 vbellur joined #gluster
18:30 msvbhat joined #gluster
18:30 vbellur joined #gluster
18:30 buvanesh_kumar joined #gluster
18:31 vbellur joined #gluster
18:31 vbellur joined #gluster
18:32 vbellur joined #gluster
18:34 vbellur joined #gluster
18:34 vbellur joined #gluster
18:36 vbellur joined #gluster
18:37 vbellur joined #gluster
18:38 arpu joined #gluster
18:38 vbellur joined #gluster
18:39 jkroon joined #gluster
18:53 MrAbaddon joined #gluster
18:55 vbellur joined #gluster
19:16 MrAbaddon joined #gluster
19:36 rafi joined #gluster
19:42 baber joined #gluster
19:58 mallorn Happy to report that our 3.10 upgrade is complete and successful.  We had some serious problems before because we were upgrading our disperse-distributed servers to 3.9; halfway through, 3.10 came out, but we kept installing 3.9.  Unfortunately, the upgrade was scripted and CentOS redirected 3.9 requests to 3.10.  That left us with some 3.9 and some 3.10 servers in a rolling upgrade.
19:58 mallorn But now that everything is at 3.10 we're fully recovered.
19:59 mallorn The moral of the story for us is not to run
19:59 mallorn yum install -y centos-release-gluster39
19:59 mallorn and expect to get 3.9.
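One way to guard against the repo silently serving a newer release mid-upgrade, as happened to mallorn, is to request an explicit version instead of whatever the release repo currently resolves to (the exact version strings below are illustrative):

```shell
# Install the SIG release repo, then pin an explicit 3.9 build rather than
# trusting "centos-release-gluster39" to keep pointing at 3.9.
yum install -y centos-release-gluster39
yum install -y glusterfs-server-3.9.1 glusterfs-fuse-3.9.1

# Verify before proceeding with each node of the rolling upgrade:
glusterfs --version
```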
20:20 rastar joined #gluster
20:26 vbellur joined #gluster
20:34 JoeJulian Sounds like a sig fail.
20:41 vbellur joined #gluster
20:49 raghu joined #gluster
21:09 major seriously..
21:34 rafi joined #gluster
21:36 guhcampos joined #gluster
22:30 owlbot joined #gluster
22:31 percevalbot joined #gluster
22:33 theron joined #gluster
22:34 theron left #gluster
22:41 gully-foyle joined #gluster
22:45 major and .. now I just need a testing framework to test git-track .....
22:45 major and then I can get back to writing C code
23:02 JoeJulian Heh
23:03 major calling it 'meh' .. Major's Eleventh Hour testing framework
23:03 major tired of systems that can't handle parallelization and test dependencies
23:03 JoeJulian I did something similar this week. I scripted up some manual steps I perform while I'm working on k2, heard someone else asking how to do what I'm doing so I added my script to the repo. Spent the last two days fielding issues about that script for OSX because it's crap.
23:04 JoeJulian (OSX, not my script).
23:04 major lol
23:04 JoeJulian Just run linux!
23:05 major anyway .. I am right about 500 lines into this thing ..
23:05 loadtheacc joined #gluster
23:05 major fully supports test dependencies and parallelization .. and it's all in POSIX shell .. and executes cleanly on *BSD, BusyBox, Dash, etc..
23:05 major because .. why not
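The core idea major describes (parallel tests from plain POSIX sh) can be sketched like this; the test names and stand-in `run_test` are hypothetical, and this is not major's actual framework:

```shell
#!/bin/sh
# Run independent tests as background jobs and tally their exit statuses,
# using nothing beyond POSIX sh (works in dash, BusyBox ash, etc.).
run_test() {
    # stand-in "test": fail only when the name contains "fail"
    case $1 in *fail*) return 1 ;; *) return 0 ;; esac
}

pids=""; pass=0; fail=0
for t in test_a test_b test_fail_c; do
    run_test "$t" &
    pids="$pids $!"
done
# wait on each recorded PID; wait returns that job's exit status
for p in $pids; do
    if wait "$p"; then pass=$(( pass + 1 )); else fail=$(( fail + 1 )); fi
done
echo "passed=$pass failed=$fail"
```

Dependencies between tests would add a scheduling layer on top of this, but the fork-and-`wait` pattern is the POSIX-portable core.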
23:28 bennyturns joined #gluster
23:44 loadtheacc joined #gluster
