
IRC log for #gluster, 2017-08-22


All times shown according to UTC.

Time Nick Message
00:17 baojg joined #gluster
00:32 h4rry joined #gluster
00:33 Guest9038 joined #gluster
00:51 artyx joined #gluster
00:51 artyx I have a gluster volume set up composed of 3 bricks on 3 different servers ... What's the syntax to mount a brick manually? They say HOSTNAME, is that localhost?
00:52 artyx I see references to backupvolfile-server etc. but I don't think this is any of that. This is a simple (no replication) filesystem composed of 3 hosts pooling their dirs together
00:57 daMaestro joined #gluster
00:59 daMaestro joined #gluster
01:07 artyx oh .. never mind, it was sanitizing the damn colon
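
For reference, the native-client syntax mounts the volume, not an individual brick, and any server in the pool can serve the volfile. A minimal sketch, assuming a placeholder volume name "myvol":

    # mount via the FUSE client; server1 can be any host in the pool
    mount -t glusterfs server1:/myvol /mnt/gluster
    # optionally name a fallback volfile server for mount time
    mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/gluster
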
01:17 artyx rls
01:33 Guest9038 joined #gluster
01:51 ilbot3 joined #gluster
01:51 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:51 aravindavk joined #gluster
01:52 h4rry joined #gluster
01:57 omie888777 joined #gluster
02:02 gospod3 joined #gluster
02:05 prasanth joined #gluster
02:17 jiffin joined #gluster
02:22 vbellur joined #gluster
02:23 baojg joined #gluster
02:39 Wizek_ joined #gluster
02:44 aravindavk joined #gluster
03:11 d0nn1e joined #gluster
03:23 ppai joined #gluster
03:25 alvinstarr1 joined #gluster
03:27 aravindavk_ joined #gluster
03:34 aravindavk joined #gluster
03:37 Guest9038 joined #gluster
03:38 riyas joined #gluster
03:51 itisravi joined #gluster
03:57 rafi joined #gluster
04:00 jiffin joined #gluster
04:02 aravindavk joined #gluster
04:02 scobanx joined #gluster
04:02 mb_ joined #gluster
04:02 scobanx Hi, I cannot create a volume using more than 4444 bricks, is this a known restriction?
04:13 kotreshhr joined #gluster
04:18 rafi scobanx: As far as I can tell, there is no hard restriction on the number of bricks. But some users have reported and identified issues related to managing large clusters. I think a new management daemon will be available soon to address those issues
04:18 rafi scobanx: btw, what kind of problems are you facing ?
04:19 dominicpg joined #gluster
04:19 scobanx rafi: I just cannot create a volume using 5500 bricks. It gives an error about 4444 bricks expected...
04:20 rafi scobanx: let me check that
04:21 rafi scobanx: can you paste the exact error message from glusterd logs
04:21 rafi scobanx: or cli output is also fine
04:22 scobanx rafi: I am at home right now, will do in 2-3 hours
04:22 rafi scobanx: never mind, I will see
04:22 rafi scobanx: thanks
04:27 Shu6h3ndu joined #gluster
04:30 farhorizon joined #gluster
04:33 h4rry joined #gluster
04:38 aravindavk joined #gluster
04:46 atinmu joined #gluster
04:48 flachtassekasse joined #gluster
04:49 poornima joined #gluster
04:52 nbalacha joined #gluster
05:00 aravindavk joined #gluster
05:06 aravindavk joined #gluster
05:09 ankitr joined #gluster
05:09 skumar joined #gluster
05:12 gyadav joined #gluster
05:13 ashiq joined #gluster
05:20 ndarshan joined #gluster
05:23 aravindavk joined #gluster
05:24 karthik_us joined #gluster
05:28 apandey joined #gluster
05:32 susant joined #gluster
05:33 hgowtham joined #gluster
05:38 hgowtham joined #gluster
05:43 hgowtham joined #gluster
05:45 prasanth joined #gluster
05:50 atinmu joined #gluster
05:52 Saravanakmr joined #gluster
05:55 masber joined #gluster
05:56 h4rry joined #gluster
06:16 rastar joined #gluster
06:19 sona joined #gluster
06:26 msvbhat joined #gluster
06:31 rafi1 joined #gluster
06:33 Saravanakmr joined #gluster
06:35 ashiq joined #gluster
06:38 jtux joined #gluster
06:38 itisravi joined #gluster
06:49 ndarshan joined #gluster
06:53 ppai joined #gluster
06:53 apandey joined #gluster
06:57 aravindavk joined #gluster
06:59 bEsTiAn joined #gluster
07:15 gyadav_ joined #gluster
07:17 ivan_rossi joined #gluster
07:23 Humble joined #gluster
07:29 atinmu joined #gluster
07:31 gyadav__ joined #gluster
07:32 fsimonce joined #gluster
07:38 ashiq joined #gluster
07:41 msvbhat joined #gluster
07:47 marbu joined #gluster
07:48 _KaszpiR_ joined #gluster
07:50 msvbhat joined #gluster
07:51 Saravanakmr joined #gluster
07:57 ndarshan joined #gluster
07:57 ppai joined #gluster
08:36 Acinonyx joined #gluster
08:54 side_control joined #gluster
08:58 MrAbaddon joined #gluster
08:58 ton31337 joined #gluster
08:59 ton31337 I have a problem from the client when using the mount command
08:59 ton31337 [2017-08-22 08:57:18.781431] I [addr.c:182:gf_auth] 0-/data/sda1/backup: allowed = "10.*", received addr = "external_ip"
08:59 ton31337 why is the received addr external, while I use the internal address with the mount command?
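
The gf_auth line above shows auth.allow set to "10.*" while the server saw the client's external source address, which usually means the mount traffic is routed or NATed out a different interface. Two hedged options, with placeholder volume name and addresses:

    # option 1: also allow the address the server actually sees
    gluster volume set myvol auth.allow "10.*,203.0.113.7"
    # option 2: check that the route to the server's internal address
    # really leaves via the internal interface before mounting
    ip route get 10.0.0.1
    mount -t glusterfs 10.0.0.1:/myvol /mnt/gluster
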
09:02 skumar_ joined #gluster
09:04 apandey_ joined #gluster
09:12 ThHirsch joined #gluster
09:16 henkjan joined #gluster
09:16 henkjan hi
09:16 glusterbot henkjan: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer a
09:17 henkjan is it default gluster behaviour for bricks to start at port number + 1 after a volume stop; volume start?
09:20 karthik_us joined #gluster
09:23 h4rry joined #gluster
09:24 ThHirsch joined #gluster
09:27 ThHirsch joined #gluster
09:30 baojg joined #gluster
09:37 buvanesh_kumar joined #gluster
09:43 itisravi joined #gluster
09:43 apandey__ joined #gluster
09:43 karthik_ joined #gluster
09:52 gyadav_ joined #gluster
09:53 msvbhat joined #gluster
09:56 gyadav__ joined #gluster
10:19 poornima joined #gluster
10:26 henkjan ah, found https://bugzilla.redhat.com/show_bug.cgi?id=1427461
10:26 glusterbot Bug 1427461: medium, medium, ---, sbairagy, CLOSED CURRENTRELEASE, Bricks take up new ports upon volume restart after add-brick op with brick mux enabled
10:26 henkjan i need to upgrade :)
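
To compare the ports bricks bind to before and after a restart, the status output lists them per brick. A sketch with a placeholder volume name:

    # the Port column shows the TCP port each brick process listens on
    gluster volume status myvol
    # or query a single brick
    gluster volume status myvol server1:/bricks/brick1
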
10:28 skumar_ joined #gluster
10:33 mbukatov joined #gluster
10:34 alvinstarr1 joined #gluster
10:45 cyberbootje joined #gluster
10:45 bfoster joined #gluster
10:45 karthik_ joined #gluster
10:51 ritzo joined #gluster
10:51 gospod3 joined #gluster
10:54 mbukatov joined #gluster
11:00 TBlaar joined #gluster
11:02 zerick joined #gluster
11:02 Klas joined #gluster
11:02 tru_tru joined #gluster
11:04 dataio joined #gluster
11:05 MrAbaddon joined #gluster
11:05 csaba joined #gluster
11:05 foster joined #gluster
11:05 tom[] joined #gluster
11:08 ankitr joined #gluster
11:09 d-fence joined #gluster
11:09 d-fence_ joined #gluster
11:09 ankitr joined #gluster
11:15 ackjewt joined #gluster
11:16 georgeangel[m] joined #gluster
11:21 siel joined #gluster
11:22 samppah joined #gluster
11:36 skumar_ joined #gluster
11:38 MikeLupe joined #gluster
11:42 Gugge joined #gluster
11:43 scobanx joined #gluster
11:44 scobanx Here is the cmd line output for the 4444 brick error: Total brick list is larger than a request. Can take (brick_count 4444)
11:45 Vaelatern joined #gluster
11:45 arpu joined #gluster
11:49 shyam joined #gluster
11:59 ahino joined #gluster
12:00 flachtassekasse joined #gluster
12:18 buvanesh_kumar joined #gluster
12:18 Wizek_ joined #gluster
12:23 scobanx joined #gluster
12:28 scobanx I think I found the limit https://fossies.org/linux/glusterfs/cli/src/cli-cmd-parser.c line 84
12:28 glusterbot Title: GlusterFS: cli/src/cli-cmd-parser.c | Fossies (at fossies.org)
12:29 scobanx Can anyone look at this and confirm? Can gluster-devs increase this limit?
12:39 baber joined #gluster
12:48 Klas do you have 4444 bricks in the same gluster setup?
12:50 scobanx klas: yes 5600 actually
12:51 Klas impressive =)
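
The error quoted above ("Total brick list is larger than a request") suggests the cap is on the size of a single CLI request rather than on the volume itself, so one untested workaround is to create the volume with a subset of the bricks and grow it in batches. A sketch with placeholder hostnames and paths (brace expansion is bash):

    # create with an initial batch that stays under the request limit
    gluster volume create bigvol server{01..10}:/bricks/b{001..100}
    gluster volume start bigvol
    # then extend toward the full brick count, one batch per command
    gluster volume add-brick bigvol server11:/bricks/b{001..100}
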
12:56 baojg joined #gluster
13:03 gyadav__ joined #gluster
13:05 Shu6h3ndu joined #gluster
13:16 plarsen joined #gluster
13:17 skylar joined #gluster
13:21 ahino1 joined #gluster
13:21 skumar joined #gluster
13:22 fsimonce joined #gluster
13:24 h4rry joined #gluster
13:36 bens_ joined #gluster
13:38 rwheeler joined #gluster
13:40 Shu6h3ndu joined #gluster
13:41 bens_ I added some new bricks to my Gluster installation today (2 x servers (replica 2) running version 3.6.1-1.el6) - adding the bricks went well and disk space was expanded...
13:41 bens_ However, for about 1-2 hours afterwards the clients could not get directory listings from some of the dirs on the gluster share, then after a while ALL of the directories/files could be retrieved as normal.
13:42 bens_ I wasn't expecting this behaviour tbh, what may have caused these 'consistency' issues?
13:42 bens_ can this happen whilst it is rebalancing the volume?
13:49 plarsen joined #gluster
13:55 buvanesh_kumar joined #gluster
14:05 fsimonce joined #gluster
14:05 fsimonce joined #gluster
14:07 mbukatov joined #gluster
14:12 prasanth joined #gluster
14:19 Wizek_ joined #gluster
14:21 dominicpg joined #gluster
14:23 shyam joined #gluster
14:26 bens_ Any clues why after adding a brick, ^^^^, the clients could not list all directories etc?
14:29 cloph distributed one? Not rebalanced/fixed-the-layout?
14:39 shyam joined #gluster
14:44 MrAbaddon joined #gluster
14:56 bens_ i ran the fix layout rebalance command
14:56 _KaszpiR_ joined #gluster
14:56 bens_ it took forever to complete however (and I think that once it finished, the files were available again)
14:57 bens_ just didn't expect to see such issues whilst rebalancing
14:57 bens_ (it may be normal and expected however : )
14:58 cloph me neither - I'd expect it to just take longer (first the computed hash misses, then it would need to look on all bricks until the volume is flagged as rebalanced, but who knows...)
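
For reference, the expansion sequence bens_ describes looks like this, with placeholder names; on a replica 2 volume the new bricks must be added in pairs:

    # add the new bricks
    gluster volume add-brick myvol server3:/bricks/b1 server4:/bricks/b1
    # spread the directory hash layout over the new bricks (no data moved)
    gluster volume rebalance myvol fix-layout start
    # watch progress; lookups can be slow until this completes
    gluster volume rebalance myvol status
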
14:59 jbrooks joined #gluster
14:59 Guest9038 joined #gluster
15:02 farhorizon joined #gluster
15:04 omie888777 joined #gluster
15:04 wushudoin joined #gluster
15:04 atinmu joined #gluster
15:04 wushudoin joined #gluster
15:14 gyadav__ joined #gluster
15:24 rastar joined #gluster
15:28 jarbod joined #gluster
15:31 hosom joined #gluster
15:32 jiffin joined #gluster
15:32 alvinstarr1 joined #gluster
15:36 hosom so what seems to stop my fuse client from hanging on heavy load is disabling write behind
15:37 hosom I'm not using replicas--didn't realize that there were common problems with write behind outside of replicated volumes
15:37 glusterbot hosom: replicas's karma is now -1
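
The toggle hosom is describing, for the record (placeholder volume name; disabling write-behind trades write performance for stability, so test before applying in production):

    # disable the write-behind performance translator
    gluster volume set myvol performance.write-behind off
    # revert with:
    gluster volume set myvol performance.write-behind on
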
15:42 kpease joined #gluster
15:47 d0minicpg joined #gluster
15:48 d0minicpg joined #gluster
15:49 baber joined #gluster
15:52 ankitr joined #gluster
16:01 scobanx joined #gluster
16:01 MikeLupe JoeJulian: Are you here? I need e little help
16:01 MikeLupe "e little" sound cute, doesn't it?
16:02 joshin joined #gluster
16:02 joshin joined #gluster
16:03 joshin left #gluster
16:04 XpineX_ joined #gluster
16:04 scobanx any devs online?
16:12 jkroon joined #gluster
16:14 XpineX joined #gluster
16:15 ivan_rossi left #gluster
16:16 baber joined #gluster
16:25 h4rry joined #gluster
16:28 jiffin joined #gluster
16:33 aravindavk joined #gluster
16:34 _KaszpiR_ joined #gluster
16:40 omie888777 joined #gluster
16:42 jiffin joined #gluster
16:44 jiffin joined #gluster
16:46 luizcpg joined #gluster
16:49 jiffin1 joined #gluster
16:52 luizcpg joined #gluster
16:55 jiffin joined #gluster
16:56 rwheeler joined #gluster
16:57 sona joined #gluster
17:02 riyas joined #gluster
17:03 jiffin joined #gluster
17:03 shaunm joined #gluster
17:03 jiffin joined #gluster
17:11 Humble joined #gluster
17:14 jiffin joined #gluster
17:15 Humble joined #gluster
17:18 jiffin joined #gluster
17:21 ritzo joined #gluster
17:22 skumar joined #gluster
17:23 ahino joined #gluster
17:38 baojg joined #gluster
17:40 jstrunk joined #gluster
17:42 kotreshhr joined #gluster
17:51 farhorizon joined #gluster
17:54 hosom anyone have a number for 'good' write latency?
17:54 hosom I'm seeing min latency 90.74 us max latency 1631.00 us, which seems a bit... high... to me
18:00 MikeLupe JoeJulian: Just writing to you created a good spirit and recovered my gluster bricks. It was a quorum problem after a DNS server move - one of the nodes missed a DNS entry...don't ask why.
18:00 MrAbaddon joined #gluster
18:02 MikeLupe JoeJulian: but maybe you have a hint about gracefully shutting down my R3A1 gluster nodes. I'll have a building power shutdown in 3 hrs...unfortunately it lasts about 6 hrs, and there's no 7 hr diesel backup here...
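
One conservative sequence for a planned outage, as a sketch only: it assumes systemd-managed nodes and a placeholder volume name, and note that stopping glusterd by itself does not stop already-running brick processes on most builds:

    # on each client, unmount so nothing writes during the shutdown
    umount /mnt/gluster
    # on one node, stop the volume cleanly (this stops the brick processes)
    gluster volume stop myvol
    # on every node: stop the management daemon, then power off
    systemctl stop glusterd
    poweroff
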
18:09 susant joined #gluster
18:09 farhorizon joined #gluster
18:13 stybla joined #gluster
18:15 Vaelatern joined #gluster
18:15 stybla hello. I have a gfs cluster with 4 nodes and volumes have 2x2 bricks. 3 servers are running v3.7.14, one has been updated to 3.11.3. there was some struggle with the update, after which gfs itself seems to be working, but bricks/remote processes are not visible in % gluster volume status; i.e. the v3.7 nodes can see the v3.7 bricks (3) and the v3.11 node can see only itself (1). % gluster peer status; shows 4 nodes. is this expected? I couldn't figure out what's wrong
18:34 baber joined #gluster
18:34 kotreshhr left #gluster
18:39 susant joined #gluster
18:52 _KaszpiR_ joined #gluster
19:03 ashiq joined #gluster
19:10 skylar joined #gluster
19:17 stybla I can see bricks connected in clients, just not listed as bricks. yeah, node restarts didn't help.
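
One thing worth checking in a mixed 3.7/3.11 pool is the cluster operating version, since peers negotiate compatibility through it. A sketch; the "volume get" form only exists on newer releases, and the right op-version value depends on the oldest release in the pool:

    # on the 3.11 node: show the current cluster op-version
    gluster volume get all cluster.op-version
    # every node also records it in its glusterd state file
    grep operating-version /var/lib/glusterd/glusterd.info
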
19:21 jbrooks joined #gluster
19:26 jbrooks joined #gluster
19:28 jbrooks joined #gluster
19:44 fidelrodriguez joined #gluster
19:45 ritzo joined #gluster
19:46 fidelrodriguez I am having issues when adding a hot tier on glusterfs-server.x86_64 0:3.11.0-0.1.rc0.el7
19:47 fidelrodriguez I get [cli-cmd-parser.c:1771:cli_cmd_volume_add_brick_parse] 0-cli: Unable to parse add-brick CLI
19:47 fidelrodriguez nothing further shows in the logs to help me further troubleshoot. Can anyone help?
19:48 cloph "unable to parse" looks like you got a syntax error in your command - so what exact command did you try to run?
19:49 fidelrodriguez gluster volume tier vmVolume attach replica 2 tcp  glusterfs1:/caching/.brickscaching  glusterfs2:/caching/.brickscaching  glusterfs3:/caching/.brickscaching glusterfs4:/caching/.brickscaching  force
19:52 scobanx joined #gluster
19:55 fidelrodriguez when I use the old command I get a better error
19:55 fidelrodriguez gluster volume attach-tier vmVolume replica 2  glusterfs1:/caching/.brickscaching  glusterfs2:/caching/.brickscaching  glusterfs3:/caching/.brickscaching glusterfs4:/caching/.brickscaching  force
19:56 fidelrodriguez gluster volume attach-tier <VOLNAME> [<replica COUNT>] <NEW-BRICK>... is deprecated. Use the new command 'gluster volume tier <VOLNAME> attach [<replica COUNT>] <NEW-BRICK>... [force]'
19:56 fidelrodriguez Do you want to Continue? (y/n) y
19:56 fidelrodriguez internet address ' glusterfs1' does not conform to standards
19:56 fidelrodriguez volume attach-tier: failed: Pre-validation failed on localhost. Please check log file for details
19:56 fidelrodriguez Tier command failed
19:58 cloph hmm - sure about the tcp in the attach command?
19:59 cloph also, force is not listed in the old command's usage..
20:00 cloph and should be careful with force anyway...
20:01 fidelrodriguez I am not sure. I tried a lot of different orderings, using both the old command and the new format, but it doesn't want to work. I even tried upgrading gluster to 3.11 using "yum --enablerepo=centos-gluster*-test install glusterfs-server"
20:02 cloph if you installed without reboot in between: sure the new gluster is actually running?
20:03 fidelrodriguez my apologies, I rebooted all 4 nodes after installation
20:04 cloph no need to apologize - /me just plays rubberduck, as I don't have experience with tiered volumes..
20:06 fidelrodriguez me neither, it's my first time implementing tiering
20:07 fidelrodriguez I just need to have my SSDs in front to do caching basically
20:09 baojg joined #gluster
20:12 fidelrodriguez Does anyone know if I am doing something wrong in setting up the tier volume?
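
For what it's worth, the " glusterfs1" in the pre-validation error above (note the leading space) hints that the doubled spaces in the pasted commands are being parsed as an empty hostname. A single-spaced form of the new syntax, following the usage string printed above, would look like this (untested, and the tcp keyword is dropped since that usage string does not list a transport argument):

    gluster volume tier vmVolume attach replica 2 \
        glusterfs1:/caching/.brickscaching \
        glusterfs2:/caching/.brickscaching \
        glusterfs3:/caching/.brickscaching \
        glusterfs4:/caching/.brickscaching force
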
20:31 m0zes joined #gluster
20:32 MikeLupe joined #gluster
20:38 baojg joined #gluster
21:05 n-st joined #gluster
21:09 n-st joined #gluster
21:34 gospod3 joined #gluster
21:53 anthony25 joined #gluster
21:55 shyam joined #gluster
22:12 baojg joined #gluster
22:16 omie888777 joined #gluster
22:20 baojg joined #gluster
22:25 baojg joined #gluster
22:27 anthony25 joined #gluster
22:28 arpu joined #gluster
22:30 baojg joined #gluster
22:52 anthony25 joined #gluster
23:06 baojg joined #gluster
23:23 baojg joined #gluster
23:41 plarsen joined #gluster
23:41 baojg joined #gluster
23:46 baojg joined #gluster
23:52 baojg joined #gluster
