
IRC log for #gluster, 2017-07-07


All times shown according to UTC.

Time Nick Message
00:04 Alghost joined #gluster
00:11 ic0n_ joined #gluster
00:39 Alghost joined #gluster
00:42 gyadav__ joined #gluster
00:54 gyadav__ joined #gluster
01:00 vbellur joined #gluster
01:07 Alghost joined #gluster
01:14 purpleidea joined #gluster
01:14 purpleidea joined #gluster
01:31 daMaestro joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:52 Alghost_ joined #gluster
02:15 plarsen joined #gluster
02:27 gyadav__ joined #gluster
02:44 prasanth joined #gluster
02:49 gyadav__ joined #gluster
02:56 ashiq joined #gluster
03:05 kramdoss_ joined #gluster
03:36 Saravanakmr joined #gluster
03:37 susant joined #gluster
03:38 nbalacha joined #gluster
03:50 itisravi joined #gluster
03:57 atinm joined #gluster
04:03 skumar joined #gluster
04:24 Alghost joined #gluster
04:31 jiffin joined #gluster
04:35 Shu6h3ndu joined #gluster
04:43 ppai joined #gluster
04:52 gyadav__ joined #gluster
04:56 hemangpatel joined #gluster
04:56 hemangpatel Hi, morning
04:57 hemangpatel What's the minimum hardware requirement for GlusterFS?
05:04 sanoj joined #gluster
05:10 amarts joined #gluster
05:10 ankitr joined #gluster
05:14 karthik_us joined #gluster
05:15 susant joined #gluster
05:19 DV joined #gluster
05:30 hemangpatel How do I know that caching is enabled on the client side and that it's working? http://blog.gluster.org/author/dlambrig/
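Nobody picked this question up in channel. A minimal way to see which client-side caching options are in effect is the volume option query sketched below; the volume name "myvol" is hypothetical, and "gluster volume get" assumes a reasonably recent release such as the 3.10.x discussed later in this log.

    gluster volume get myvol all | grep -E 'performance\.(io-cache|cache-size|quick-read|write-behind)'
    # options that were explicitly changed also appear under "Options Reconfigured" in:
    gluster volume info myvol

The first command prints the effective value of each option, defaults included, so it also answers whether client-side caching is enabled at all.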
05:32 k0nsl_ joined #gluster
05:36 hgowtham joined #gluster
05:37 kotreshhr joined #gluster
05:40 prasanth joined #gluster
05:42 apandey joined #gluster
05:43 sanoj joined #gluster
05:46 ashiq joined #gluster
05:47 Karan joined #gluster
05:54 hemangpatel Where can I see a translator chain like the one in this thread? http://lists.gluster.org/pipermail/gluster-devel/2008-July/033495.html
05:54 glusterbot Title: [Gluster-devel] how to set the default read/write block size for all transactions for optimal performance (e.g. anything similar to rsize, wsize nfs options?) (at lists.gluster.org)
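No one answered this either. For a modern glusterd-managed volume, one way to see the translator chain is to look at the generated client volfile; a sketch, where the volume name "myvol" is hypothetical and the exact volfile filename varies between releases:

    gluster system:: getspec myvol
    # or read the generated file directly on any server, e.g.:
    cat /var/lib/glusterd/vols/myvol/trusted-myvol.tcp-fuse.vol

Each "volume ... type ... subvolumes ... end-volume" stanza in that output is one translator, written bottom of the stack first.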
05:54 Saravanakmr joined #gluster
06:12 sona joined #gluster
06:20 aravindavk joined #gluster
06:25 Saravanakmr joined #gluster
06:31 msvbhat joined #gluster
06:32 skumar joined #gluster
06:32 hgowtham joined #gluster
06:33 [diablo] joined #gluster
06:35 pioto joined #gluster
06:36 kdhananjay joined #gluster
06:40 partner joined #gluster
06:40 ashka joined #gluster
06:40 ashka joined #gluster
06:41 rossdm joined #gluster
06:41 rossdm joined #gluster
06:41 Ulrar joined #gluster
06:42 mlg9000 joined #gluster
06:42 matt_ joined #gluster
06:43 vbellur joined #gluster
06:44 jkroon joined #gluster
06:47 rafi joined #gluster
06:50 skoduri joined #gluster
06:50 skumar joined #gluster
07:02 susant joined #gluster
07:06 ankitr joined #gluster
07:09 itisravi joined #gluster
07:14 hemangpatel joined #gluster
07:23 ivan_rossi joined #gluster
07:26 Karan joined #gluster
07:31 skumar_ joined #gluster
07:43 Wizek_ joined #gluster
07:48 _KaszpiR_ joined #gluster
08:01 fsimonce joined #gluster
08:39 aravindavk joined #gluster
08:45 aravindavk joined #gluster
08:55 aravindavk joined #gluster
08:55 skumar_ joined #gluster
09:00 itisravi joined #gluster
09:01 aravindavk joined #gluster
09:19 aravindavk joined #gluster
09:27 apandey joined #gluster
09:27 jkroon joined #gluster
09:34 susant joined #gluster
09:52 msvbhat joined #gluster
09:54 aravindavk joined #gluster
10:07 aravindavk joined #gluster
10:20 skumar_ joined #gluster
10:27 sanoj joined #gluster
10:30 shyam joined #gluster
10:39 mbukatov joined #gluster
10:43 amarts joined #gluster
10:44 marbu joined #gluster
10:49 uebera|| joined #gluster
10:49 uebera|| joined #gluster
10:50 uebera|| joined #gluster
10:53 Alghost joined #gluster
11:04 ogelpre my gluster nodes are all dual-stacked, but gluster only listens on ipv4. if i do a peer probe it throws this error: failed: Probe returned with Transport endpoint is not connected
11:04 ogelpre has anybody tested gluster in dual-stacked environments?
11:07 jkroon ogelpre, mine works.
11:08 ogelpre jkroon: where can i enable ipv6?
11:09 jkroon no idea.  mine uses ipv4 even though there are ipv6 addresses.
11:09 ogelpre jkroon: here it probes first ipv6 and then gives up
11:10 ogelpre gluster 3.10.4
11:10 jkroon when i set it up it was a pure ipv4 environment.
11:12 ogelpre i'll test using ipv4 addresses instead of hostnames
11:12 jkroon does ping6 for the hostname work?
11:13 jkroon and does ip6tables allow the required input?
11:15 ogelpre root@gluster-3:~# netstat -tulpen | grep gluster
11:15 ogelpre tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      0          21664      2700/glusterd
11:16 ogelpre it doesn't even listen on tcp6
11:16 amarts joined #gluster
11:18 jkroon doesn't mean that the initial outbound connection won't try ipv6.  does the hostname resolve to both ipv6 and ipv4 addresses?
11:20 ogelpre yep
11:20 ogelpre ssh works, ping works
11:20 ogelpre everything is setup with my default ansible playbook
11:20 jkroon ping6?
11:21 ogelpre works
11:21 ogelpre ping works
11:21 jkroon ok, so is ssh connecting using ipv6 by any chance?
11:23 xavih joined #gluster
11:24 Wizek_ joined #gluster
11:24 ogelpre jkroon: ssh works with -4 and -6. there are no iptables rules at all
11:25 ogelpre the outbound connection tries ipv6
11:25 ogelpre but glusterd is not listening on ipv6
11:26 jkroon http://lists.gluster.org/pipermail/gluster-users/2017-February/029938.html
11:26 glusterbot Title: [Gluster-users] IPv4 / IPv6 doesn't work (at lists.gluster.org)
11:27 jkroon option transport.address-family inet inet4 <-- may be what you're looking for.
11:27 glusterbot jkroon: <'s karma is now -30
11:28 baber joined #gluster
11:31 ogelpre jkroon: hm, that doesn't help me at probe time, because i can set it only per volume?
11:32 jkroon fair enough :)
11:32 ic0n_ joined #gluster
11:32 jkroon can you make gluster listen on ipv6 as well?
11:33 jkroon are you explicitly setting the bind address?
11:34 jkroon hmm, looks like you can
11:34 jkroon glusterd.vol file, under volume management, mine has a line that's commented by default:  option transport.address-family inet6
11:34 kshlm ogelpre, You can edit the /etc/glusterfs/glusterd.vol to set address-family
11:35 kshlm jkroon, You found it as well :)
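For reference, a trimmed-down sketch of what kshlm and jkroon are pointing at: in /etc/glusterfs/glusterd.vol the address-family option ships commented out; uncommenting it (and restarting glusterd on every node) switches the management daemon to IPv6. The shipped file carries more options than shown here:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket
        option transport.address-family inet6
    end-volume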
11:35 ankitr joined #gluster
11:35 jkroon kshlm, most things either end up being right in front of you and you overlook them, or google points them out to you.  in this case google was misleading, and when I started going through the config files it was there :)
11:36 ogelpre kshlm: i found your issue https://github.com/gluster/glusterfs/issues/192
11:36 glusterbot Title: [RFE] Improve IPv6 support in GlusterFS · Issue #192 · gluster/glusterfs · GitHub (at github.com)
11:36 ogelpre is ipv6 stable enough?
11:37 jkroon kshlm, and for everything else, there is "the source"
11:41 kshlm ogelpre, Cannot comment on that. AFAIK no one among the developers has ever tried glusterfs in an ipv6 environment, and no testing happens on it right now.
11:41 kshlm That's the reason for the issue.
11:42 kshlm Right now we have a patch from facebook (who are using gluster in an ipv6 env).
11:42 kshlm We hope to get it into the next release.
11:43 ogelpre kshlm: i hope so too
11:48 Wizek_ joined #gluster
11:57 Wizek_ joined #gluster
12:02 Saravanakmr joined #gluster
12:03 plarsen joined #gluster
12:08 major joined #gluster
12:11 skoduri joined #gluster
12:20 arif-ali joined #gluster
12:23 Wizek_ joined #gluster
12:28 Wizek_ joined #gluster
12:28 ogelpre i've deleted volume home and tried to recreate it, but i get this error: volume create: home: failed: /gluster/home/brick is already part of a volume
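This one also went unanswered in channel. The usual cause is that the old brick directory still carries the volume-id extended attribute and the .glusterfs directory from the deleted volume. Assuming the old contents of /gluster/home/brick really are disposable, the commonly suggested (destructive) cleanup on each brick host looks roughly like:

    setfattr -x trusted.glusterfs.volume-id /gluster/home/brick
    setfattr -x trusted.gfid /gluster/home/brick
    rm -rf /gluster/home/brick/.glusterfs

After that, the volume create should no longer complain that the path is already part of a volume.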
12:32 saintpablos joined #gluster
12:37 plarsen joined #gluster
12:50 vbellur joined #gluster
12:51 vbellur1 joined #gluster
12:51 vbellur joined #gluster
12:52 vbellur joined #gluster
12:52 vbellur joined #gluster
12:53 vbellur joined #gluster
12:56 vbellur joined #gluster
13:01 Wizek_ joined #gluster
13:12 Wizek_ joined #gluster
13:12 skumar joined #gluster
13:23 susant joined #gluster
13:24 skylar joined #gluster
13:30 jstrunk joined #gluster
13:32 shyam joined #gluster
13:38 msvbhat joined #gluster
13:38 kramdoss_ joined #gluster
13:41 Wizek_ joined #gluster
13:43 _KaszpiR_ joined #gluster
13:55 shaunm joined #gluster
13:56 susant left #gluster
13:57 vbellur joined #gluster
14:05 kpease joined #gluster
14:18 baber joined #gluster
14:21 Jacob843 joined #gluster
14:24 vbellur joined #gluster
14:30 ankitr joined #gluster
14:31 elico left #gluster
14:34 Wizek__ joined #gluster
14:43 Wizek_ joined #gluster
14:43 fsimonce joined #gluster
14:49 nbalacha joined #gluster
14:50 Wizek__ joined #gluster
14:52 vbellur joined #gluster
14:55 fsimonce joined #gluster
15:05 baber joined #gluster
15:14 hgowtham joined #gluster
15:17 marlinc joined #gluster
15:18 amarts joined #gluster
15:19 msvbhat joined #gluster
15:19 shyam joined #gluster
15:21 vbellur joined #gluster
15:29 kotreshhr left #gluster
15:46 baber joined #gluster
16:00 kramdoss_ joined #gluster
16:06 shaunm joined #gluster
16:07 gyadav__ joined #gluster
16:16 MrAbaddon joined #gluster
16:24 shyam joined #gluster
16:28 baber joined #gluster
16:34 ivan_rossi left #gluster
16:44 vbellur joined #gluster
16:57 Wizek__ joined #gluster
17:01 victori joined #gluster
17:03 ankitr joined #gluster
17:13 mbukatov joined #gluster
17:17 pocketprotector- joined #gluster
17:20 major_ joined #gluster
17:21 skoduri joined #gluster
17:28 mbukatov joined #gluster
17:30 tannerb3 joined #gluster
17:32 Humble joined #gluster
17:34 marbu joined #gluster
17:43 rafi joined #gluster
17:49 tannerb3 I appear to be having a problem geo-replicating one gluster volume to another cluster; when I check the geo-replication status it appears one brick is stuck in Hybrid Crawl, while the others are all in Changelog
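For context, that status comes from the geo-replication status command; a sketch of the invocation, with the master volume, slave host and slave volume names all hypothetical:

    gluster volume geo-replication mastervol slavehost::slavevol status detail

Broadly, a brick in Hybrid Crawl is still walking the filesystem to catch up, while Changelog Crawl means it is replaying journaled changes; a large or busy brick can sit in hybrid crawl for quite a while before switching over.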
17:52 shyam joined #gluster
17:55 vbellur joined #gluster
17:58 bit_lySLH2uSZHed joined #gluster
17:59 bit_lySLH2uSZHed left #gluster
18:01 tannerb3 looks like I am missing about 0.9TB out of 6TB
18:01 tannerb3 (according to df)
18:02 tannerb3 I am also seeing a lot of this [fuse-bridge.c:3428:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage
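One thing worth ruling out for that warning is a brick (or client-side) filesystem that genuinely lacks extended attribute support. A rough check, where /path/to/brick is a placeholder and user.test is an arbitrary throwaway attribute:

    setfattr -n user.test -v works /path/to/brick && getfattr -n user.test /path/to/brick
    setfattr -x user.test /path/to/brick

If that fails, the backing filesystem (or its mount options) is the problem; if it succeeds, the message is more likely about a specific attribute namespace the backend does not expose.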
18:09 Karan joined #gluster
18:10 shyam joined #gluster
18:12 mbukatov joined #gluster
18:15 cliluw joined #gluster
18:15 ic0n_ joined #gluster
18:29 vbellur joined #gluster
18:37 jkroon joined #gluster
18:39 Speccter joined #gluster
18:40 Speccter Hey guys.  When using gluster on a ZFS backend, what happens to bricks & volumes when you expand the ZFS backend?
18:44 tannerb3 if it is similar to LVM (and expanding the lv) gluster will see that it got bigger
18:45 Speccter tannerb3: thanks.  What happens during the time the 2 bricks (replicated) aren't the same size?
18:45 Speccter will it go temporarily offline or something?
18:47 tannerb3 it should be fine since the actual data isn't changing
18:47 tannerb3 i.e. the replica brick should still have enough capacity
18:47 Speccter ok thanks!
18:48 tannerb3 caveat: I'm no gluster expert and could be wrong
18:48 tannerb3 however I _have_ expanded an LVM volume (non-replicated) and it went fine
18:49 Speccter ok thanks
18:49 Speccter I'll need to test that out
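For the record, a sketch of what each expansion looks like; device, VG/LV, pool and dataset names are hypothetical, and only the LVM case is the one tannerb3 says he has actually done:

    # LVM-backed brick: grow the LV and resize the filesystem on it in one step
    lvextend -r -L +500G /dev/vg_bricks/lv_brick1
    # ZFS-backed brick: add capacity to the pool, or raise a dataset quota
    zpool add tank mirror /dev/sdc /dev/sdd
    zfs set quota=10T tank/brick1

Either way the brick simply gets bigger underneath gluster; df on clients should pick up the new size without a remount.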
18:54 vbellur joined #gluster
18:54 vbellur joined #gluster
18:55 vbellur joined #gluster
18:56 vbellur1 joined #gluster
18:57 vbellur joined #gluster
19:01 rafi joined #gluster
19:01 sona joined #gluster
19:06 bit_lySLH2uSZHed joined #gluster
19:08 bit_lySLH2uSZHed left #gluster
19:13 baber joined #gluster
19:19 rafi1 joined #gluster
19:21 Wizek_ joined #gluster
19:30 jkroon joined #gluster
19:31 rafi joined #gluster
19:49 baber joined #gluster
19:55 okabe joined #gluster
19:56 okabe hello, got a couple questions about glusterfs's bricking and pooling
19:56 okabe i have 16 servers, and i want them to have redundancy but also increase the storage capacity
19:57 okabe when the peering happens, is it a p2p based thing or is it master/slave?
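okabe left before anyone replied, but for the record: gluster peering is symmetric (a flat trusted storage pool, no master/slave roles), and redundancy plus added capacity is typically a distributed-replicated volume. A sketch with hypothetical hostnames and brick paths, pairing 16 servers into replica-2 sets:

    # from any one node, probe each of the other 15 once
    gluster peer probe server2
    # bricks are consumed in replica pairs -> 8 distribute subvolumes
    gluster volume create bigvol replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server3:/bricks/b1 server4:/bricks/b1 \
        ...
    gluster volume start bigvol

(Plain replica 2 is prone to split-brain; replica 3 or an arbiter brick is the usual recommendation, but that is a separate trade-off.)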
19:57 victori joined #gluster
20:02 rafi1 joined #gluster
20:03 lkoranda joined #gluster
20:39 okabe left #gluster
20:43 Champi joined #gluster
21:31 victori joined #gluster
21:34 gyadav__ joined #gluster
22:06 gyadav__ joined #gluster
22:11 ic0n_ joined #gluster
22:18 ankitr joined #gluster
22:32 daMaestro joined #gluster
22:34 victori joined #gluster
23:02 nirokato joined #gluster
23:06 vbellur joined #gluster
23:11 jkroon joined #gluster
23:15 victori joined #gluster
23:16 Alghost joined #gluster
23:26 mgethers joined #gluster
23:26 mgethers left #gluster
23:29 victori joined #gluster
23:34 shaunm joined #gluster
23:42 jkroon joined #gluster
23:58 jkroon joined #gluster
