IRC log for #gluster, 2017-02-02


All times shown according to UTC.

Time Nick Message
00:07 srsc joined #gluster
00:23 dahu33 joined #gluster
00:33 srsc having a strange issue creating a new distributed six-node volume with gluster 3.9.0. if i specify a single transport (rdma or tcp) it works fine. if i specify multiple transports (gluster volume create volname transport rdma,tcp...) the volume is created fine but there are errors when trying to `gluster volume start volname`
00:35 srsc the error is always on the first node, which is where i'm running the `gluster volume create` commands
00:35 srsc https://paste.fedoraproject.org/542654/95679148/
00:35 glusterbot Title: #542654 • Fedora Project Pastebin (at paste.fedoraproject.org)
00:35 srsc the hostnames i'm using in the gluster volume create command are infiniband addresses (ipoib)
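For readers following along, a minimal sketch of the commands being described, with placeholder volume, host and brick names rather than srsc's actual ones:

    # create a six-node distributed volume that accepts both TCP and RDMA connections
    gluster volume create testvol transport tcp,rdma \
        gserver01.ib.example:/data/brick gserver02.ib.example:/data/brick \
        gserver03.ib.example:/data/brick gserver04.ib.example:/data/brick \
        gserver05.ib.example:/data/brick gserver06.ib.example:/data/brick
    # this is the step reported to fail when both transports are specified
    gluster volume start testvol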
00:41 Klas joined #gluster
00:43 srsc same thing happens when i use normal tcp hostname in `gluster volume create`
00:52 JoeJulian Well that's not very helpful, is it?
00:52 JoeJulian Is that from gserver01?
00:52 srsc yes, glusterd.log on gserver01
00:54 JoeJulian Any clue in bricks/media-brick-data.log?
00:57 srsc hmm, yes. [2017-02-02 00:51:38.158189] E [glusterfsd-mgmt.c:1925:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: gserver01.ib.ourdomain (Transport endpoint is not connected)
00:58 srsc so it can't connect to...itself
00:58 JoeJulian I wonder if "remote-host" is normal...
00:58 srsc that hostname is resolvable from that server
00:59 JoeJulian ok, that was my next q
00:59 JoeJulian is glusterd listening on 24008?
01:00 srsc nope, only 24007
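For reference, glusterd's management port is 24007/tcp, and 24008 is expected to be in use for RDMA management once the rdma transport is loaded. A quick way to check, sketched with standard tools:

    # list glusterd's listening TCP sockets
    ss -tlnp | grep glusterd
    # or look for the management ports directly
    netstat -tlnp | grep -E ':2400[78]'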
01:01 JoeJulian which distro?
01:01 srsc debian jessie (8.7), amd64
01:04 JoeJulian You don't have a line that looks like, "0-rpc-transport: /usr/lib/glusterfs/$version/rpc-transport/rdma.so: cannot open shared object file: No such file or directory" in your glusterd log, do you?
01:06 john51 joined #gluster
01:07 srsc no, it doesn't contain that
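For anyone hitting the missing-rdma.so error JoeJulian quotes, a hedged sketch of how to check for the RDMA transport on a Debian-based system; exact package names and library paths vary by distro and build:

    # list the installed gluster packages (RDMA support may ship separately)
    dpkg -l | grep -i gluster
    # the transport module should exist somewhere under an rpc-transport directory
    find /usr/lib -name rdma.so -path '*rpc-transport*'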
01:08 srsc is there any downside to running over rdma transport vs tcp,rdma? from the docs it seems like clients can still connect using tcp?
01:10 JoeJulian If the clients can still connect and mount, I can think of no piece that would pose a problem.
01:10 srsc thanks, i'll experiment with that
01:10 shutupsquare joined #gluster
01:17 srsc the "transport endpoint is not connected" error may have been a red herring from some dns issues i resolved a while ago. it doesn't reappear in the logs when i retry volume creation with both transports.
01:21 srsc also it looks like clients can't mount rdma-only transport volumes using tcp
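As a reference for the mount behaviour being described, this is roughly how a client picks a transport at mount time (placeholder names; on an rdma-only volume the tcp variant would be expected to fail):

    # mount over RDMA
    mount -t glusterfs -o transport=rdma gserver01.ib.example:/testvol /mnt/testvol
    # mount over TCP, which requires the volume to have a tcp or tcp,rdma transport
    mount -t glusterfs -o transport=tcp gserver01.example:/testvol /mnt/testvol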
01:27 victori joined #gluster
01:34 srsc i'm thinking something must be wrong with my rdma setup. i can create a single transport rdma volume without error, but i get "Input/output error"s when i try to write test data to the rdma mounted volume
01:34 srsc odd because the standard iperf, ib_read_bw, etc tools seem to work fine
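For context, these are the sort of low-level InfiniBand sanity checks being referred to; a sketch only, and package names (infiniband-diags, ibverbs-utils, perftest) differ between distros:

    ibstat                            # port State should be Active
    ibv_devinfo                       # verbs-level view of the HCA
    ib_read_bw                        # run with no arguments on one node...
    ib_read_bw gserver01.ib.example   # ...and point the other node at it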
01:35 victori joined #gluster
01:42 susant joined #gluster
01:54 arpu joined #gluster
02:01 Gambit15 joined #gluster
02:02 daMaestro joined #gluster
02:04 victori joined #gluster
02:05 daMaestro joined #gluster
02:25 derjohn_mob joined #gluster
02:29 dahu33 joined #gluster
02:33 dahu33 joined #gluster
02:37 plarsen joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:52 tg2 joined #gluster
03:02 cacasmacas joined #gluster
03:11 srsc yeah, i think this was an infiniband setup problem on my end. sorry for false alarm.
03:11 srsc left #gluster
03:12 susant left #gluster
03:13 dahu33 joined #gluster
03:16 dahu33 joined #gluster
03:30 gem joined #gluster
03:30 dahu33 joined #gluster
03:33 dahu33 joined #gluster
03:41 vbellur joined #gluster
03:45 mb_ joined #gluster
03:46 magrawal joined #gluster
03:59 buvanesh_kumar joined #gluster
04:08 poornima joined #gluster
04:08 gem joined #gluster
04:13 itisravi joined #gluster
04:15 rjoseph joined #gluster
04:18 sanoj joined #gluster
04:19 sbulage joined #gluster
04:22 atinm joined #gluster
04:22 dahu33 joined #gluster
04:28 gem joined #gluster
04:28 nbalacha joined #gluster
04:29 Lee1092 joined #gluster
04:31 ankit joined #gluster
04:34 victori joined #gluster
04:40 Prasad joined #gluster
04:45 skumar joined #gluster
04:52 gem joined #gluster
04:54 armyriad joined #gluster
04:56 RameshN joined #gluster
05:01 dahu33 joined #gluster
05:05 prasanth joined #gluster
05:10 Shu6h3ndu joined #gluster
05:12 dahu33 joined #gluster
05:14 the-me joined #gluster
05:17 karthik_us joined #gluster
05:18 dahu33 joined #gluster
05:18 msvbhat joined #gluster
05:22 bbooth joined #gluster
05:24 sanoj joined #gluster
05:24 dahu33 joined #gluster
05:27 ppai joined #gluster
05:28 buvanesh_kumar joined #gluster
05:29 the-me joined #gluster
05:36 dahu33 joined #gluster
05:37 ndarshan joined #gluster
05:50 riyas joined #gluster
05:51 msvbhat joined #gluster
05:57 dahu33 joined #gluster
05:57 Shu6h3ndu joined #gluster
05:57 apandey joined #gluster
06:03 susant joined #gluster
06:07 msvbhat joined #gluster
06:08 loadtheacc joined #gluster
06:10 sbulage joined #gluster
06:11 rafi joined #gluster
06:13 rastar_away joined #gluster
06:13 gem joined #gluster
06:13 nbalacha joined #gluster
06:14 gyadav joined #gluster
06:15 Klas joined #gluster
06:17 Larsen_ joined #gluster
06:21 susant joined #gluster
06:22 victori joined #gluster
06:24 rafi1 joined #gluster
06:33 kdhananjay joined #gluster
06:36 kramdoss_ joined #gluster
06:37 msvbhat joined #gluster
06:38 jkroon joined #gluster
06:38 Wizek_ joined #gluster
06:39 prasanth joined #gluster
06:46 victori joined #gluster
06:56 aravindavk joined #gluster
06:56 msvbhat joined #gluster
07:10 Jacob843 joined #gluster
07:13 mb_ joined #gluster
07:17 Shu6h3ndu joined #gluster
07:18 kdhananjay joined #gluster
07:24 msvbhat joined #gluster
07:31 k4n0 joined #gluster
07:34 dahu33 joined #gluster
07:36 jtux joined #gluster
07:43 rafi1 joined #gluster
07:49 akay JoeJulian what would you class as lots of RAM for ZFS?
07:58 kshlm joined #gluster
08:03 kshlm joined #gluster
08:07 shutupsquare joined #gluster
08:07 shutupsquare joined #gluster
08:12 pulli joined #gluster
08:17 k4n0 joined #gluster
08:18 unlaudable joined #gluster
08:27 apandey joined #gluster
08:28 jri joined #gluster
08:32 fsimonce joined #gluster
08:37 [diablo] joined #gluster
08:38 unlaudable joined #gluster
08:40 sona joined #gluster
08:49 rastar joined #gluster
08:50 victori joined #gluster
08:52 k4n0 joined #gluster
08:53 unlaudable joined #gluster
08:55 ahino joined #gluster
08:59 Iouns joined #gluster
08:59 musa22 joined #gluster
09:04 gyadav_ joined #gluster
09:06 sanoj joined #gluster
09:12 Philambdo joined #gluster
09:15 karthik_us joined #gluster
09:24 pulli joined #gluster
09:26 rafi1 joined #gluster
09:31 shutupsquare joined #gluster
09:42 karthik_us joined #gluster
09:43 ShwethaHP joined #gluster
09:44 pulli joined #gluster
09:45 victori joined #gluster
09:47 flying joined #gluster
09:49 jri joined #gluster
09:54 armyriad joined #gluster
09:55 jtux joined #gluster
10:00 victori joined #gluster
10:12 shutupsquare joined #gluster
10:12 jkroon joined #gluster
10:46 panina joined #gluster
10:50 jri joined #gluster
11:03 jri joined #gluster
11:10 skumar joined #gluster
11:12 skumar_ joined #gluster
11:13 derjohn_mob joined #gluster
11:24 Akram joined #gluster
11:24 Akram ping, Hi all, is there a way to change the storage hostname for a node?
11:27 jri joined #gluster
11:27 msvbhat joined #gluster
11:30 rastar joined #gluster
11:38 musa22 joined #gluster
11:42 saybeano joined #gluster
11:58 rafi joined #gluster
12:05 bfoster joined #gluster
12:11 bfoster joined #gluster
12:13 karthik_us joined #gluster
12:18 nishanth joined #gluster
12:18 rafi joined #gluster
12:27 victori joined #gluster
12:37 kettlewell joined #gluster
12:41 shyam joined #gluster
12:45 skoduri joined #gluster
12:53 ankit_ joined #gluster
13:02 kettlewell left #gluster
13:03 skoduri joined #gluster
13:04 musa22 joined #gluster
13:05 ppai joined #gluster
13:12 unclemarc joined #gluster
13:13 k4n0 joined #gluster
13:21 Seth_Karlo joined #gluster
13:22 Seth_Karlo joined #gluster
13:34 msvbhat joined #gluster
13:55 olia joined #gluster
13:56 msvbhat joined #gluster
13:59 shyam joined #gluster
14:00 XpineX joined #gluster
14:01 olia Hi! I need some help on a brick error. In brick logs I have a "mismatching volume-id <ID1> received. already is part of volume <ID2>" error after redeploying. The strange thing is that from the gluster info, this volume has ID1 and ID2 is an id of another volume. What does this error mean?
14:02 ira joined #gluster
14:02 musa22 joined #gluster
14:03 nbalacha joined #gluster
14:07 pioto joined #gluster
14:08 buvanesh_kumar joined #gluster
14:19 tallmocha joined #gluster
14:22 Humble joined #gluster
14:23 tallmocha joined #gluster
14:25 tallmocha Is hot/cold tiering in the latest stable Gluster? Searching online turns up little information :?
14:25 plarsen joined #gluster
14:26 DV joined #gluster
14:30 skylar joined #gluster
14:30 neofob joined #gluster
14:35 sbulage joined #gluster
14:38 shaunm joined #gluster
14:41 nishanth joined #gluster
14:42 federicoaguirre joined #gluster
14:42 federicoaguirre Hi there.! I have a doubt.!
14:42 federicoaguirre I have a 2 nodes cluster replicated
14:43 federicoaguirre I'm using one of them to copy my data...
14:43 federicoaguirre What should I do to get it replicated?
14:48 victori joined #gluster
14:51 rwheeler joined #gluster
14:53 JoeJulian Akram: not easily and not without downtime. You would have to stop your volumes, stop glusterd, sed replace the hostname in every file under /var/lib/glusterd on every server, then start it all up again.
14:53 Akram JoeJulian: very bad :(
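A rough sketch of the procedure JoeJulian outlines, to be repeated on every server; the hostnames are placeholders and the service name differs by distro (glusterd on RPM-based systems, glusterfs-server on Debian):

    gluster volume stop myvol      # repeat for every volume
    service glusterd stop
    grep -rl 'old.host.example' /var/lib/glusterd | \
        xargs sed -i 's/old.host.example/new.host.example/g'
    service glusterd start
    gluster volume start myvol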
14:54 JoeJulian olia: The bricks are marked with the volume-id with an ,,(extended attribute) to prevent bricks being corrupted if they're mounted in the wrong place.
14:54 glusterbot olia: To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}
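Putting glusterbot's hint together with olia's error, a sketch of how to compare the ID recorded on the brick with the one glusterd expects; brick path and volume name are placeholders:

    # the brick root carries the volume's UUID as an extended attribute (shown as hex)
    getfattr -m . -d -e hex /data/brick1 | grep volume-id
    # compare with what glusterd has on record for the volume (a dashed UUID)
    grep volume-id /var/lib/glusterd/vols/myvol/info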
14:55 JoeJulian tallmocha: yes
14:56 zoyvind_ joined #gluster
14:56 tallmocha @JoeJulian awesome, are there any docs on this, I found a few blog posts and stuff from Red Hat, but nothing in the official docs
14:56 JoeJulian federicoaguirre: The FS in GlusterFS is filesystem. You have to mount it to use its features. Writing to a brick is the rough equivalent of dd'ing data to the middle of and ext4 formatted partition and expecting a directory entry to appear for it.
14:56 JoeJulian s/and/an/
14:56 glusterbot What JoeJulian meant to say was: federicoaguirre: The FS in GlusterFS is filesystem. You have to mount it to use its features. Writing to a brick is the rough equivalent of dd'ing data to the middle of an ext4 formatted partition and expecting a directory entry to appear for it.
14:56 mckendr Sounds like you want ceph. :p
14:58 ankit_ joined #gluster
14:58 JoeJulian tallmocha: I'm sure I've seen some, but I just used up the two minutes I had before I have to leave for a meeting and commute. I'll see what I can find when I get to the office.
14:59 tallmocha @JoeJulian my main question is what format is recommended for the NVMe hot tier. XFS? Ah okay, thanks!
15:00 JoeJulian My personal preference for NVMe is btrfs
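For reference, attaching and detaching a hot tier in the 3.7–3.9 era looks roughly like the following; volume name, replica count and NVMe brick paths are placeholders, and older releases also accepted the attach-tier/detach-tier spellings:

    gluster volume tier myvol attach replica 2 \
        server1:/nvme/brick server2:/nvme/brick
    gluster volume tier myvol status
    # later, to remove the hot tier again
    gluster volume tier myvol detach start
    gluster volume tier myvol detach commit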
15:02 federicoaguirre sounds good.!
15:02 federicoaguirre I'm gonna wait.!
15:03 olia JoeJulian: Thank you very much!
15:04 msvbhat joined #gluster
15:06 shruti joined #gluster
15:06 atinmu joined #gluster
15:06 sac joined #gluster
15:07 Prasad joined #gluster
15:08 Prasad_ joined #gluster
15:09 farhorizon joined #gluster
15:09 Prasad__ joined #gluster
15:10 zoyvind joined #gluster
15:10 sanoj joined #gluster
15:11 lalatenduM joined #gluster
15:11 Gambit15 JoeJulian, re Akram's FQDN change, would it be sufficient to maintain DNS records for both the old & new FQDN & then update all of the FQDNs in glusterd with everything live, thereby being able to restart the nodes one-by-one & not having to restart them all at once...?
15:12 shruti joined #gluster
15:14 Akram___ joined #gluster
15:14 Akram___ JoeJulian: what about additional names? how are they used? can we update them also ?
15:15 Akram___ JoeJulian: are you familiar with the openshift installation mode ?
15:16 Gambit15 Akram___, he's afk, however I just queried something else that may be of use to you...
15:16 Gambit15 Gambit15> JoeJulian, re Akram's FQDN change, would it be sufficient to maintain DNS records for both the old & new FQDN & then update all of the FQDNs in glusterd with everything live, thereby being able to restart the nodes one-by-one & not having to restart them all at once...?
15:19 Wizek_ joined #gluster
15:20 lalatend1M joined #gluster
15:20 sac` joined #gluster
15:20 Prasad_ joined #gluster
15:21 Gambit15 joined #gluster
15:22 Prasad__ joined #gluster
15:26 lalatenduM joined #gluster
15:26 msvbhat joined #gluster
15:30 sac joined #gluster
15:30 shyam joined #gluster
15:37 skylar joined #gluster
15:42 tallmocha I'm running 3.7.8, we have a distributed replica cluster, with a pair of 1TB drives and a pair of 500GB drives. The volume is showing as 92GB free but we have started getting no space left on disk errors.
15:43 tallmocha The 1TB is full, but the 500GB has the 92GB free on it. Any reason for the errors?
15:45 TvL2386 joined #gluster
15:45 vbellur joined #gluster
15:49 farhoriz_ joined #gluster
15:56 sanoj tallmocha, do u have quota configured on the volume?
16:00 tallmocha @sanoj nope, should I?
16:00 annettec joined #gluster
16:00 sanoj no I just asked because that could be a reason
16:00 sanoj tallmocha, ^
16:02 tallmocha @sanoj ah I wasn't sure if it was needed. Trying to think of something to use the last available space. We are moving to a new gluster cluster but files are still being used on this one and the issue has only started showing today :/
16:03 sanoj tallmocha, what is the space usage on individual bricks. Is it that some bricks are full while others arent
16:05 tallmocha @sanoj yes, the larger brick pair is full, the smaller pair has space left
16:06 sanoj tallmocha, I was thinking it would be other way around. I am not sure if the dht file placement does consider brick size
16:09 Telsin joined #gluster
16:09 sanoj tallmocha, One thing you could probably try is to carve out 2 bricks from each of the 1TB drives, so that all your bricks are of equal size.
16:12 guhcampos joined #gluster
16:12 tallmocha @sanoj same. I thought it did though, reading online "Prior to 3.6, bricks with heterogeneous sizes were treated as equal regardless of size, and would have been assigned an equal share of files. From 3.6, assignment of files to bricks will take into account the sizes of the bricks."
16:12 tallmocha Unless that is something else?
16:14 squizzi joined #gluster
16:14 sanoj tallmocha, hmm need to check if it applies to distributed replicate.
16:16 pulli joined #gluster
16:17 susant joined #gluster
16:17 shutupsq_ joined #gluster
16:19 nbalacha joined #gluster
16:20 sanoj tallmocha, does look like it is supported on dist-rep http://lists.gluster.org/pipermail/gluster-users/2014-January/015660.html.
16:20 tallmocha @sanoj I can't seem to find anything. But for the % used surely we would have run into the issue much sooner and not at 92%
16:21 sanoj tallmocha, correct
16:21 tallmocha Ah cool. I'm wondering if the rebalance didn't work when we originally added the new pair of bricks
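For anyone reading later, a sketch of the commands for checking per-brick usage and re-running a rebalance after bricks have been added; the volume name is a placeholder:

    gluster volume status myvol detail      # free space per brick
    gluster volume rebalance myvol status   # did the last rebalance finish?
    gluster volume rebalance myvol start    # migrate existing files onto the new bricks
    # fix-layout only rewrites directory layouts so that new files can land on new bricks
    gluster volume rebalance myvol fix-layout start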
16:22 sanoj tallmocha, cool.
16:22 shutupsquare joined #gluster
16:23 Karan joined #gluster
16:24 plarsen joined #gluster
16:25 farhorizon joined #gluster
16:28 ankit_ joined #gluster
16:28 rastar joined #gluster
16:30 farhoriz_ joined #gluster
16:37 xMopxShell joined #gluster
16:50 gyadav_ joined #gluster
16:51 jdossey joined #gluster
16:57 bbooth joined #gluster
16:59 pioto joined #gluster
17:00 pulli joined #gluster
17:01 farhorizon joined #gluster
17:01 kpease joined #gluster
17:07 nbalacha joined #gluster
17:11 ankit_ joined #gluster
17:19 cacasmacas joined #gluster
17:31 neofob joined #gluster
17:31 tyler274 joined #gluster
17:39 Seth_Kar_ joined #gluster
17:49 susant left #gluster
17:51 edong23 joined #gluster
17:58 kpease joined #gluster
17:59 bbooth joined #gluster
18:10 guhcampos joined #gluster
18:30 farhoriz_ joined #gluster
18:31 JoeJulian Gambit15: No, glusterd will fail to start if the configuration between peers doesn't match.
18:32 pulli joined #gluster
18:33 JoeJulian And everyone else is gone, so that's an easy catch up.
18:33 stomith joined #gluster
18:38 kpease joined #gluster
18:39 jri joined #gluster
18:39 kpease_ joined #gluster
18:44 bbooth joined #gluster
19:02 neofob left #gluster
19:07 fcoelho joined #gluster
19:23 ahino joined #gluster
19:32 gem joined #gluster
19:35 calisto joined #gluster
19:46 cholcombe are the samba vfs modules for gluster shipping with ubuntu xenial?
19:48 cholcombe looks like it landed in samba 4.4
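For reference, the integration cholcombe is asking about is Samba's vfs_glusterfs module; a hedged sketch of a share definition, assuming the module is packaged for the distro and using placeholder volume and log names:

    [gluster-share]
        path = /
        read only = no
        vfs objects = glusterfs
        glusterfs:volume = myvol
        glusterfs:volfile_server = localhost
        glusterfs:logfile = /var/log/samba/glusterfs-myvol.%M.log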
20:14 derjohn_mob joined #gluster
20:16 ic0n joined #gluster
20:18 tallmocha joined #gluster
20:20 tallmocha Going to be testing some NVMe PCIe drives as a hot tier tomorrow on a new cluster build. Should I go for xfs, ext4 or btrfs? I'm using xfs on the spinning disks currently.
20:27 DV joined #gluster
20:30 jwd joined #gluster
20:32 JoeJulian I still prefer btrfs for the NVMe. My second choice would be xfs.
20:32 bbooth joined #gluster
20:37 tallmocha @JoeJulian for a filesystem noob could you explain why? Would it be advised to use the same on the cold tier or will it not matter?
20:41 ahino joined #gluster
20:45 DV joined #gluster
20:59 bbooth joined #gluster
21:05 JoeJulian tallmocha: To do so with any quality would take more time than I have available at the moment. Sounds like a good blog post though, maybe I'll do that.
21:06 tallmocha @JoeJulian understood, I would be interested in that. Will see if there is any information online.
21:07 pulli joined #gluster
21:09 PaulCuzner joined #gluster
21:09 PaulCuzner left #gluster
21:16 Gambit15 tallmocha, I don't know if btrfs has any particular changes that make it better for SSD use, but I personally would be wary of it for storing important data on traditional disks because it's still considered a relatively young FS. I use ZFS for our backup servers (better integrity at the cost of performance) & XFS on our production servers (better performance than EXT4)
21:17 Gambit15 Btrfs has been in development for a good while, but they've spent a lot more time on adding new features than on maturing & stabilising existing ones
21:21 tallmocha @Gambit15 thanks for the input. I might give XFS a go then first, it's what we're currently using on our spinning disks. Is XFS still Gluster's recommendation?
21:23 Gambit15 Can't say. FWIW, I've had great responses from the xfs channel. Very helpful
21:23 tallmocha Cool, thanks
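For what it's worth, the Gluster documentation of this period recommends XFS bricks formatted with a 512-byte inode size; a minimal sketch with a placeholder NVMe device and mount point:

    mkfs.xfs -i size=512 /dev/nvme0n1
    mkdir -p /bricks/hot1
    mount -o noatime /dev/nvme0n1 /bricks/hot1
    # add a matching /etc/fstab entry so the brick survives a reboot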
21:35 victori joined #gluster
21:40 JoeJulian btrfs is much more robust than it was 3 years ago. I use it in production.
21:50 vbellur1 joined #gluster
22:12 jri joined #gluster
22:33 pulli joined #gluster
22:45 stomith joined #gluster
22:53 farhorizon joined #gluster
23:00 musa22 joined #gluster
23:13 bowhunter joined #gluster
23:16 vbellur joined #gluster
23:22 farhoriz_ joined #gluster
23:31 tallmocha joined #gluster
23:52 nishanth joined #gluster
