IRC log for #gluster, 2017-06-15

All times shown according to UTC.

Time Nick Message
00:00 plarsen joined #gluster
00:07 Alghost joined #gluster
00:07 Alghost joined #gluster
00:15 rastar joined #gluster
00:27 rafi1 joined #gluster
00:39 victori joined #gluster
01:15 jbrooks joined #gluster
01:19 fsimonce joined #gluster
01:27 daMaestro joined #gluster
01:28 kramdoss_ joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:58 jbrooks joined #gluster
02:24 kpease joined #gluster
02:56 nbalacha joined #gluster
02:58 shyam joined #gluster
03:39 vbellur joined #gluster
03:43 vbellur joined #gluster
03:44 vbellur joined #gluster
03:45 riyas joined #gluster
03:47 vbellur joined #gluster
03:48 vbellur joined #gluster
03:49 vbellur joined #gluster
04:08 gyadav joined #gluster
04:16 buvanesh_kumar joined #gluster
04:17 vbellur joined #gluster
04:18 vbellur joined #gluster
04:20 atinm joined #gluster
04:25 ppai joined #gluster
04:37 itisravi joined #gluster
04:43 ankitr joined #gluster
04:49 BlackoutWNCT Hey guys, I've got 2 bricks in a replica 2 volume that are extremely out of sync. It's only these two bricks, and both of the machines in this replica have 3 bricks each.
04:49 BlackoutWNCT Actually, scrap that, it's all bricks in this replica.
04:49 BlackoutWNCT But all bricks are shown as online.
04:50 BlackoutWNCT These two machines just aren't able to sync with one another.
04:50 BlackoutWNCT Anyone got any ideas?
04:50 skumar joined #gluster
04:51 farhorizon joined #gluster
04:52 skoduri joined #gluster
04:53 BlackoutWNCT ok, I think I fixed it actually, restarted the glusterfs-server service on both machines.
04:53 BlackoutWNCT Log now says that it's self healing.
04:53 BlackoutWNCT Thanks for all the help guys :D
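
A minimal sketch of the recovery steps BlackoutWNCT describes, assuming a replica volume named "gv0" (a hypothetical name); the service unit name varies by distro (glusterfs-server on Debian/Ubuntu, glusterd on RHEL/CentOS):

    # Confirm all bricks and self-heal daemons are online
    gluster volume status gv0
    # List entries still pending heal on each brick
    gluster volume heal gv0 info
    # Restart the daemon if bricks are stuck (Debian/Ubuntu unit name)
    systemctl restart glusterfs-server
    # Optionally force a full self-heal sweep afterwards
    gluster volume heal gv0 full
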
04:59 jiffin joined #gluster
05:00 Shu6h3ndu joined #gluster
05:00 prasanth joined #gluster
05:02 moneylotion joined #gluster
05:14 Humble joined #gluster
05:23 XpineX joined #gluster
05:24 karthik_us joined #gluster
05:24 kotreshhr joined #gluster
05:28 Prasad joined #gluster
05:39 ndarshan joined #gluster
05:42 hgowtham joined #gluster
05:52 apandey joined #gluster
06:02 msvbhat joined #gluster
06:07 sanoj joined #gluster
06:11 sahina joined #gluster
06:12 skoduri joined #gluster
06:13 susant joined #gluster
06:16 kdhananjay joined #gluster
06:17 Karan joined #gluster
06:40 sona joined #gluster
06:41 _KaszpiR_ joined #gluster
06:50 ashiq joined #gluster
07:04 jkroon joined #gluster
07:09 susant joined #gluster
07:18 mbukatov joined #gluster
07:18 BitByteNybble110 joined #gluster
07:24 [diablo] joined #gluster
07:34 nbalacha joined #gluster
07:43 kramdoss_ joined #gluster
07:46 Wizek__ joined #gluster
07:50 MrAbaddon joined #gluster
07:56 ahino joined #gluster
08:02 panina joined #gluster
08:27 buvanesh_kumar joined #gluster
08:29 xiu b 12
08:30 apandey_ joined #gluster
08:35 delhage joined #gluster
08:43 Seth_Karlo joined #gluster
08:44 itisravi joined #gluster
08:45 msvbhat joined #gluster
08:45 nbalacha joined #gluster
08:48 Seth_Kar_ joined #gluster
08:48 nbalacha joined #gluster
08:51 atinm joined #gluster
08:52 jcookeman joined #gluster
08:52 skoduri joined #gluster
08:54 rastar joined #gluster
08:54 jcookeman Hi channel. I was curious if I was looking in the right place. So, here goes. We are using OpenShift, and need shared block storage with Gluster volumes. We will have multiple Gluster nodes in multiple AZs and will use NFS-Ganesha. Will the volumes be converged across multiple hosts sharing the same bricks on nfs-ganesha?
08:55 apandey__ joined #gluster
08:58 ankitr joined #gluster
09:03 Seth_Karlo joined #gluster
09:04 nbalacha Hi everyone. We are planning to improve the gluster documentation and it would help if we could get your feedback. This has already been posted on gluster-users ML but sending this on IRC in case someone here is not subscribed to the ML.
09:04 ankitr joined #gluster
09:05 nbalacha please send any doc feedback you have to gluster-users@gluster.org
09:07 hgowtham joined #gluster
09:10 cloph first feedback: don't break all the links from search engines again ;->
09:13 sahina joined #gluster
09:15 susant joined #gluster
09:17 rastar joined #gluster
09:26 ankitr joined #gluster
09:26 poornima joined #gluster
09:29 nbalacha joined #gluster
09:32 aravindavk joined #gluster
09:35 ankitr joined #gluster
09:37 ankitr joined #gluster
09:39 ashiq joined #gluster
09:40 ppai joined #gluster
09:41 hgowtham joined #gluster
09:43 jiffin jcookeman: nfs-ganesha can export a gluster volume, so nfs-ganesha should connect to all the bricks
09:45 jcookeman jiffin: Right, I'm just trying to verify that nfs-ganesha can export the Gluster volume from multiple Gluster hosts in that replica
09:46 Seth_Karlo joined #gluster
09:46 jiffin Yup
09:46 jiffin you need to enable the cache-invalidation feature
09:46 Seth_Kar_ joined #gluster
09:47 jcookeman So if I have multiple nodes writing to the nfs-ganesha export through different node members of the Gluster cluster over nfs-ganesha
09:47 jcookeman There's a mouthful
09:47 jcookeman And I believe I did enable that feature
09:48 jiffin Yes
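
For reference, a sketch of the cache-invalidation settings jiffin mentions, assuming a volume named "gv0" (a hypothetical name); these are the upstream option names:

    # Send upcall notifications so multiple nfs-ganesha heads stay coherent
    gluster volume set gv0 features.cache-invalidation on
    # How long (in seconds) invalidation events are sent for an accessed inode
    gluster volume set gv0 features.cache-invalidation-timeout 600
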
09:48 kdhananjay joined #gluster
09:51 jcookeman jiffin: well I'm excited then!
09:52 kramdoss_ joined #gluster
09:54 msvbhat joined #gluster
10:02 atinm joined #gluster
10:07 Seth_Karlo joined #gluster
10:08 susant joined #gluster
10:14 mb_ joined #gluster
10:16 rafi joined #gluster
10:22 mb_ joined #gluster
10:23 mb_ joined #gluster
10:25 kramdoss_ joined #gluster
10:35 jcookeman jiffin: what packages are required to mount a nfs-ganesha export?
10:39 msvbhat joined #gluster
10:39 apandey joined #gluster
10:40 cloph from the client side it is just regular nfs, nothing special needed.
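
A sketch of such a client-side mount, assuming a ganesha head "ganesha1" exporting volume "gv0" at the pseudo path /gv0 (both hypothetical); nfs-ganesha speaks NFSv4 by default:

    # Only the stock NFS client tools are needed (nfs-common on Debian, nfs-utils on RHEL)
    mount -t nfs -o vers=4 ganesha1:/gv0 /mnt/gv0
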
10:41 sac` joined #gluster
10:42 sona joined #gluster
10:46 rastar joined #gluster
10:51 csaba joined #gluster
11:05 mbukatov joined #gluster
11:11 jcookeman cloph: yesterday I could not mount it. Perhaps I need to specify nfs4?
11:11 jcookeman I was getting a bad superblock error
11:11 jcookeman Then I installed the gluster packages and it worked.
11:12 cloph how did you try to mount it? surely not via nfs, since nfs wouldn't care about the superblock or the real filesystem layout
11:18 neferty is there a way to make the various `performance.` settings use a different default for new volumes?
11:18 neferty instead of setting them individually for each volume?
11:18 aravindavk joined #gluster
11:28 sahina joined #gluster
11:28 Alghost joined #gluster
11:29 vbellur joined #gluster
11:29 vbellur joined #gluster
11:30 vbellur joined #gluster
11:31 vbellur joined #gluster
11:32 vbellur joined #gluster
11:32 vbellur joined #gluster
11:33 vbellur joined #gluster
11:33 vbellur joined #gluster
11:34 vbellur joined #gluster
11:35 vbellur joined #gluster
11:40 Alghost joined #gluster
11:42 rafi joined #gluster
11:52 sahina joined #gluster
11:56 ashiq joined #gluster
11:58 skoduri joined #gluster
12:00 buvanesh_kumar_ joined #gluster
12:06 marbu joined #gluster
12:06 susant left #gluster
12:09 ppai joined #gluster
12:11 Seth_Karlo joined #gluster
12:15 Seth_Karlo joined #gluster
12:20 msvbhat joined #gluster
12:53 fsimonce joined #gluster
13:02 buvanesh_kumar joined #gluster
13:02 nbalacha joined #gluster
13:06 susant joined #gluster
13:13 jcookeman cloph: sorry let me look further.
13:15 susant left #gluster
13:22 farhorizon joined #gluster
13:28 kotreshhr left #gluster
13:31 hgowtham joined #gluster
13:39 plarsen joined #gluster
14:01 zcourts joined #gluster
14:02 vbellur joined #gluster
14:11 hgowtham joined #gluster
14:32 amosbird joined #gluster
14:35 cornfed78 joined #gluster
14:36 Wizek__ joined #gluster
14:42 bit4man joined #gluster
14:48 Seth_Kar_ joined #gluster
14:50 Seth_Kar_ joined #gluster
14:51 nbalacha Hi - sending this again for the benefit of those who joined the channel recently. We are planning to improve the gluster documentation and it would help if we could get your feedback. This has already been posted on gluster-users ML but sending this on IRC in case someone here is not subscribed to the ML.
15:01 wushudoin joined #gluster
15:06 shyam joined #gluster
15:11 kpease joined #gluster
15:14 riyas joined #gluster
15:15 ashiq joined #gluster
15:18 JoeJulian neferty: There may be a way using hooks, see /var/lib/glusterd/hooks. I don't believe there's any documentation for their use, however.
15:18 neferty :/
15:19 JoeJulian It's pretty straightforward if you look at the structure and the existing contents.
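
A sketch of what such a hook could look like, following the layout JoeJulian points at; the script name and the option it sets are hypothetical:

    #!/bin/bash
    # Save as /var/lib/glusterd/hooks/1/create/post/S50-default-options.sh (mode 0755).
    # glusterd runs post-create hooks with arguments like --volname=<name>.
    for arg in "$@"; do
        case "$arg" in
            --volname=*) volname="${arg#--volname=}" ;;
        esac
    done
    # Hypothetical default; substitute whatever settings you want applied to new volumes.
    gluster volume set "$volname" performance.cache-size 256MB
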
15:19 Seth_Karlo joined #gluster
15:19 neferty that's fair, but "it's not documented" is not exactly a good selling point when i'm trying to get my team on board :)
15:20 JoeJulian Pull requests accepted. ;)
15:20 neferty heh, maybe
15:21 JoeJulian There are also groups that you can use to set a group of config settings, but you'd still have to apply that group per-volume.
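
A sketch of that group mechanism, assuming a custom group file named "mydefaults" (hypothetical); the files shipped under /var/lib/glusterd/groups show the format:

    # One option=value per line in the group file
    cat > /var/lib/glusterd/groups/mydefaults <<'EOF'
    performance.cache-size=256MB
    performance.io-thread-count=32
    EOF
    # Apply the whole group to a volume in one command
    gluster volume set gv0 group mydefaults
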
15:21 neferty the thing is that volume creation is done programmatically for me through heketi, so i'd have to edit heketi sources to hack that into it
15:22 JoeJulian Ah, I was looking at the gluster documentation and suddenly I realized that hooks were a feature added for the downstream Red Hat Storage product. https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Console_Administration_Guide/Console_Administration_Guide-Managing_Hooks.html
15:22 glusterbot Title: Chapter 6. Managing Gluster Hooks (at access.redhat.com)
15:25 JoeJulian Soon... https://github.com/humblec/heketi/commit/1151fb50bcce48c7310f6af47ae8471a4fb7e1d5
15:25 glusterbot Title: Set Gluster Volume options based on user input · humblec/heketi@1151fb5 · GitHub (at github.com)
15:25 nbalacha joined #gluster
15:26 neferty ah, good catch
15:28 JoeJulian Also Rook is adding glusterfs support. Probably not production usable for a few months at least, but that's something else to keep on your radar.
15:34 aravindavk joined #gluster
15:35 cloph question re remove-brick on a distributed volume: is remove-brick start meant to move everything? What is the difference from commit?
15:35 cloph will commit just pull the plug and loose the files on the brick?
15:35 JoeJulian It is, yes.
15:37 JoeJulian yes, commit will finalize the change and the old brick will no longer be part of the volume.
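
The sequence under discussion, assuming volume "gv0" and brick "server1:/bricks/b1" (hypothetical names): start migrates the data off, status shows progress, commit detaches the brick:

    gluster volume remove-brick gv0 server1:/bricks/b1 start
    # Poll until the migration reports "completed"
    gluster volume remove-brick gv0 server1:/bricks/b1 status
    # Only then finalize; committing early abandons whatever is still on the brick
    gluster volume remove-brick gv0 server1:/bricks/b1 commit
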
15:37 kkeithley in case you missed [10:51:37] <nbalacha> Hi - sending this again for the benefit of those who joined the channel recently. We are planning to improve the gluster documentation and it would help if we could get your feedback. This has already been posted on gluster-users ML but sending this on IRC in case someone here is not subscribed to the ML.
15:37 kkeithley you might make a point of asking for the Managing Hooks bits to get added to the upstream docs
15:37 JoeJulian I did and was going to ask for more information, but nbalacha is no longer in channel.
15:37 cloph But I cannot use commit without the start and wait otherwise I'll loose data, right?
15:38 JoeJulian No, but you will lose data.
15:38 JoeJulian It never comes loose.
15:38 cloph ah, those typos..
15:38 * JoeJulian laughs at my own joke.
15:38 kkeithley their you go again
15:39 JoeJulian I need one of Jeff's new T-Shirts.
15:39 kkeithley which one is that?
15:39 JoeJulian https://twitter.com/JoeCyberGuru/status/874252058500636672
15:39 JoeJulian er
15:39 JoeJulian https://twitter.com/Obdurodon/status/874251764538773506
15:43 cloph don't get the "You're evil" comment.. (C not symmetrical and thus triggering OCD?? or am I missing the obvious?)
15:44 zcourts joined #gluster
15:46 guhcampos joined #gluster
15:51 JoeJulian Click on the tweet or you'll probably have the second word cut off.
15:52 cloph ah!
15:53 misc reminds me of https://media.boingboing.net/wp-content/uploads/2016/06/056c026d-1c66-4d42-9fae-a8e96df290c5-1020x1102.jpg
15:57 cloph https://www.facebook.com/spectremediagroup/photos/a.804713236269205.1073741829.804599542947241/1496492090424646/?type=3&theater is similar (although different background - he likes to mock bass players because they don't know the material when hitting the studio for recording/never practice/never change strings, ...)
15:57 glusterbot Title: Timeline Photos (at www.facebook.com)
16:05 zcourts joined #gluster
16:05 rastar joined #gluster
16:22 ivan_rossi left #gluster
16:28 Gambit15 joined #gluster
16:31 jkroon joined #gluster
16:36 rastar joined #gluster
16:37 Seth_Karlo joined #gluster
17:01 rafi1 joined #gluster
17:36 jiffin joined #gluster
17:36 shyam joined #gluster
17:47 skoduri joined #gluster
17:50 k0nsl joined #gluster
17:50 k0nsl joined #gluster
18:04 susant joined #gluster
18:18 om2 joined #gluster
18:43 Karan joined #gluster
18:46 jiffin joined #gluster
18:49 ahino joined #gluster
19:38 dgandhi joined #gluster
20:50 rastar joined #gluster
21:08 primehaxor joined #gluster
21:24 shyam joined #gluster
21:30 ndboost joined #gluster
21:30 ndboost afternoon all
21:30 ndboost new to gluster, i have 4 physical disks (5GB each) formatted as ext4 and mounted to /mnt/gluster/vol0{1,2,3,4}
21:31 ndboost how can i use these four disks with gluster?
21:31 ndboost would i create a brick for each disk?
21:31 ndboost or vol?
21:32 ndboost so if i understand right, one brick for each phys disk at /mnt/gluster/vol0x and then a vol combining all those bricks
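
That is the usual layout; a sketch of it, assuming a hostname "node1" (hypothetical) and a subdirectory of each mount point as the brick, the common convention so gluster can tell an unmounted disk apart from a brick:

    # One brick directory per disk
    mkdir /mnt/gluster/vol0{1,2,3,4}/brick
    # A 4-brick distribute volume on a single host
    gluster volume create gv0 \
        node1:/mnt/gluster/vol01/brick node1:/mnt/gluster/vol02/brick \
        node1:/mnt/gluster/vol03/brick node1:/mnt/gluster/vol04/brick
    gluster volume start gv0
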
21:57 zcourts joined #gluster
22:22 Alghost joined #gluster
22:23 om2 joined #gluster
23:06 e1z0 joined #gluster
23:06 tyler-wong joined #gluster
23:09 gyadav joined #gluster
23:47 gyadav joined #gluster
23:57 Alghost joined #gluster
23:57 MrAbaddon joined #gluster