
IRC log for #gluster, 2016-02-25


All times shown according to UTC.

Time Nick Message
00:01 haomaiwa_ joined #gluster
00:05 sebamontini joined #gluster
00:07 merp_ joined #gluster
00:09 jhyland joined #gluster
00:11 john51 joined #gluster
00:21 Wizek joined #gluster
00:23 DV joined #gluster
00:24 ovaistariq joined #gluster
00:31 semajnz joined #gluster
00:43 jhyland Is there a way to test the IO speed of gluster volume from a client?
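A common way to get a rough client-side number is to run fio (or plain dd) against the FUSE mount; a minimal sketch, assuming the volume is mounted at /mnt/glustervol (path and sizes are placeholders):

    # sequential write, then sequential read; --direct=1 keeps the local page cache
    # out of the measurement (drop it if the mount refuses O_DIRECT)
    fio --name=seqwrite --directory=/mnt/glustervol --rw=write --bs=1M --size=1G --direct=1
    fio --name=seqread  --directory=/mnt/glustervol --rw=read  --bs=1M --size=1G --direct=1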
01:01 haomaiwa_ joined #gluster
01:18 merp_ joined #gluster
01:20 EinstCrazy joined #gluster
01:21 nathwill joined #gluster
01:23 haomaiwa_ joined #gluster
01:26 julim joined #gluster
01:27 Lee1092 joined #gluster
01:32 delhage joined #gluster
01:34 johnmilton joined #gluster
01:59 haomaiwa_ joined #gluster
02:01 haomaiwa_ joined #gluster
02:24 nthomas joined #gluster
02:24 tyrok_laptop joined #gluster
02:25 tyrok_laptop Hi!  How do I enable AFRv2 on a volume in Gluster 3.7.8?  I thought it was automatic, but I've seen mention of people needing to specifically enable it, and I'm seeing locking behavior during self-heal which looks a lot like AFRv1's locking.
02:25 harish joined #gluster
02:26 hagarth joined #gluster
02:29 DV joined #gluster
02:32 natarej joined #gluster
02:33 johnmilton joined #gluster
02:35 DV joined #gluster
02:37 DV joined #gluster
02:37 tyrok_laptop Even better would be AFRv3, but I don't know how Gluster determines which one to use.  It's a two-node replica=2 cluster with both nodes on 3.7.8.  cluster.op-version is at 30603.
02:40 DV joined #gluster
02:42 tyrok_laptop I suppose my question is more "how do I determine the version I'm at to confirm a hunch" rather than "how do I force it to a particular version". Anyway, not impatient for an answer, just want to make sure there's enough info to answer it.
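Two low-risk things that may help narrow this down (a sketch, with <volname> as a placeholder):

    # the operating version glusterd is actually running at
    grep operating-version /var/lib/glusterd/glusterd.info

    # what the self-heal daemon is currently working on, to correlate with the stalls
    gluster volume heal <volname> info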
02:46 plarsen joined #gluster
02:50 nehar joined #gluster
02:57 hchiramm_ joined #gluster
03:00 DV joined #gluster
03:01 haomaiwa_ joined #gluster
03:02 DV joined #gluster
03:20 DV joined #gluster
03:26 ira joined #gluster
03:28 shubhendu joined #gluster
03:38 hagarth joined #gluster
03:39 hagarth left #gluster
03:47 atinm joined #gluster
03:48 RameshN joined #gluster
03:50 nbalacha joined #gluster
04:01 haomaiwang joined #gluster
04:02 itisravi joined #gluster
04:03 itisravi joined #gluster
04:10 kanagaraj joined #gluster
04:11 rafi joined #gluster
04:15 jhyland joined #gluster
04:18 aravindavk joined #gluster
04:21 arcolife joined #gluster
04:27 karthikfff joined #gluster
04:28 kotreshhr joined #gluster
04:29 skoduri joined #gluster
04:36 haomaiwang joined #gluster
04:43 sakshi joined #gluster
04:47 ppai joined #gluster
04:54 kotreshhr joined #gluster
04:58 gem joined #gluster
05:01 pppp joined #gluster
05:01 ndarshan joined #gluster
05:01 haomaiwa_ joined #gluster
05:02 aravindavk joined #gluster
05:02 Saravanakmr joined #gluster
05:02 nehar joined #gluster
05:04 merp_ joined #gluster
05:05 jiffin joined #gluster
05:06 poornimag joined #gluster
05:16 jhyland joined #gluster
05:18 ovaistar_ joined #gluster
05:20 nehar joined #gluster
05:20 vmallika joined #gluster
05:22 Apeksha joined #gluster
05:30 Bhaskarakiran joined #gluster
05:36 karnan joined #gluster
05:45 ashiq_ joined #gluster
05:49 nbalacha joined #gluster
05:50 shubhendu joined #gluster
05:54 ggarg joined #gluster
05:59 aravindavk joined #gluster
06:01 haomaiwa_ joined #gluster
06:04 kdhananjay joined #gluster
06:10 shubhendu joined #gluster
06:11 calavera joined #gluster
06:11 nbalacha joined #gluster
06:13 ramky joined #gluster
06:14 nthomas joined #gluster
06:19 hgowtham joined #gluster
06:25 jhyland joined #gluster
06:28 nehar joined #gluster
06:29 shubhendu joined #gluster
06:32 kshlm joined #gluster
06:32 kdhananjay joined #gluster
06:33 coredump joined #gluster
06:36 atalur joined #gluster
06:43 gem joined #gluster
06:49 overclk joined #gluster
06:51 sakshi joined #gluster
06:52 shubhendu joined #gluster
07:01 haomaiwa_ joined #gluster
07:03 gowtham joined #gluster
07:05 skoduri joined #gluster
07:21 jtux joined #gluster
07:25 robb_nl joined #gluster
07:26 hackman joined #gluster
07:27 [diablo] joined #gluster
07:33 kdhananjay1 joined #gluster
07:34 itisravi joined #gluster
07:36 [Enrico] joined #gluster
07:43 robb_nl joined #gluster
07:46 hchiramm joined #gluster
07:47 Simmo joined #gluster
07:48 Simmo Good morning Guys : -)
07:49 atalur joined #gluster
07:49 Simmo I have posted already in the mailing list, but since my anxiety is starting to increase I'll bother you here as well :-/
07:49 Simmo So far I have a replicated volume setup with 3 nodes (volume created with ... replica 3 <host1:brick1> <host2:brick1><host3:brick1>)
07:50 Simmo Since the load will go super high I need to create 17 EC2 instances
07:51 Simmo Should I create a "replica 17" ? Keep in mind that the application needs to load local files in memory ... that's the reason for the replication (i.e. having files locally on each instance)
07:51 Simmo So sorry for those dummy questions :_/
08:01 haomaiwang joined #gluster
08:03 deniszh joined #gluster
08:14 hgowtham joined #gluster
08:19 aravindavk joined #gluster
08:23 Manikandan joined #gluster
08:27 enzob joined #gluster
08:28 jri joined #gluster
08:30 ivan_rossi joined #gluster
08:33 hgowtham joined #gluster
08:33 fsimonce joined #gluster
08:36 honzik666 joined #gluster
08:37 atalur joined #gluster
08:46 d4n13L no you shouldn't
08:46 d4n13L why would you want to have 17 replicas!?
08:46 post-factum remember, each extra replica multiplies your client traffic
08:47 post-factum probably, one would like to enjoy replica 3 + georeplication
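To put rough numbers on that: replication in Gluster is done by the client, which writes to every replica itself, so on a replica 17 volume each 1 GB written by the application turns into roughly 17 GB on that client's network link, versus about 3 GB with replica 3.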
08:48 d4n13L Simmo: Joe did a pretty good blog post about that a couple of years back; it still applies: https://joejulian.name/blog/glusterfs-replication-dos-and-donts/
08:48 glusterbot Title: GlusterFS replication do's and don'ts (at joejulian.name)
08:48 nbalacha joined #gluster
08:49 Simmo Nice, thanks guys! I think I start to understand : )
08:50 Simmo I was missing a bit the NFS concept behind it. I think I would do a 3 + 1 arbiter setup
08:50 Simmo and then mount the FS on the clients
08:50 Slashman joined #gluster
08:50 Simmo But, physically, what happens when a client tries to "read" a file? Is it transferred over the network? Is a copy kept locally?
08:51 Simmo In my use case, the application calls a binary and that binary loads a file into memory...
08:52 Simmo Lol, from the blog "What's a poor way to use replication? => A copy on every server"
08:53 Simmo :D
08:54 d4n13L yeah, because it simply makes no sense at all :)
08:55 Simmo I just realized it : )
08:55 bfm joined #gluster
08:55 Simmo At least if Joe had to write a blog post, it means I'm not the only newbie around :-p
08:56 hgowtham joined #gluster
09:01 haomaiwa_ joined #gluster
09:02 JonB joined #gluster
09:05 JonB my sequential write speed is higher than my sequential read speed? 3 disks in 2 machines, replicate=2 stripe=3, mounting gluster locally on 1 machine with 2 net cards, one used for gluster, the other for samba that exports to a 3rd machine, which is seeing writes faster than reads (tested using dd from /dev/zero with bs=128k or 64k)
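For reference, a dd run along the lines JonB describes might look like this (mount point and sizes are placeholders; direct I/O keeps the local page cache out of the measurement):

    dd if=/dev/zero of=/mnt/glustervol/ddtest bs=128k count=8192 oflag=direct conv=fsync
    dd if=/mnt/glustervol/ddtest of=/dev/null bs=128k iflag=direct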
09:05 itisravi Simmo: btw, it is not 3+1 for the arbiter volume. It is a replica 3 volume but the 3rd brick is the arbiter brick. So it's more of a 2 + 1.
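An arbiter volume as itisravi describes is a replica 3 where the third brick stores only metadata; a minimal sketch, assuming gluster >= 3.7 (hostnames, volume name and brick paths are placeholders):

    gluster volume create myvol replica 3 arbiter 1 \
        host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/arb1
    gluster volume start myvol
    # clients then mount the volume instead of keeping a full copy per instance
    mount -t glusterfs host1:/myvol /mnt/myvol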
09:09 Simmo itisravi: ops, you're right. It would make sense in my scenario to have a replica with 4 nodes (3 + 1).. ?
09:10 Simmo from the mailing list it has been suggested that even this might be overkill
09:10 itisravi Simmo: no, just 3 nodes is fine..
09:10 itisravi Simmo: yeah I wrote that ;)
09:10 Simmo Soooo!!! Super thanks then!
09:11 Simmo Your posts helped me to put my brain under work : )
09:11 itisravi cool!
09:11 muneerse joined #gluster
09:16 hchiramm joined #gluster
09:27 jbrooks joined #gluster
09:28 gem joined #gluster
09:29 ctria joined #gluster
09:48 haomai___ joined #gluster
09:51 nbalacha joined #gluster
09:52 spalai joined #gluster
09:58 Manikandan joined #gluster
10:01 hgowtham joined #gluster
10:01 haomaiwa_ joined #gluster
10:01 spalai left #gluster
10:04 ashiq_ joined #gluster
10:07 EinstCrazy joined #gluster
10:11 hgowtham joined #gluster
10:19 satheesaran_ joined #gluster
10:35 nbalacha joined #gluster
10:37 JonB FYI: found the cause of my low read performance: only half the bricks were online :-(  luckily still only testing the system, learning every day :-)
10:44 jotun joined #gluster
10:56 drankis joined #gluster
11:01 haomaiwa_ joined #gluster
11:02 mhulsman joined #gluster
11:14 at0r joined #gluster
11:16 at0r Hi, I want to know if it is possible to turn an existing directory with files on an xfs partition into a brick to be used in a glusterfs volume.
11:17 harish_ joined #gluster
11:18 Manikandan joined #gluster
11:18 at0r I tried 'gluster volume create test replica 2 gluster0:/mnt/brick0/test gluster1:/mnt/brick0/test' where /mnt/brick0/test on gluster0 already had data in it. But that didn't work.
11:24 mhulsman joined #gluster
11:25 itisravi at0r: no, bricks must not contain any data beforehand.
11:27 at0r itisravi: thank you.
11:35 post-factum at0r, itisravi: actually, it may, and self-heal *should* (not must) manage it, but the creation must be done with "force"
11:35 post-factum no guarantees, btw
11:36 post-factum i restored deleted volume once with "create force", and it worked for me
11:37 post-factum never do that unless you know what you are doing
11:39 itisravi post-factum: it is  recommended not to do that. You never know if assigning gfids etc would always be supported for files that you 'added' to the brick directly :)
11:42 kkeithley1 joined #gluster
11:42 mhulsman joined #gluster
11:43 JonB itisravi, but if he creates a brick folder on the same disk, creates a volume with that brick folder, mounts the same gluster volume somewhere else, can't he move files from that disk_but_not_in_brick_folder to the new mount point?
11:43 EinstCrazy joined #gluster
11:47 at0r post-factum: so i tried to create the volume with the force parameter. it seems to be working :)
11:47 itisravi JonB: that is possible..all I'm saying is don't add files directly to the bricks of a volume or attach bricks that already contain data to an existing volume.
11:47 post-factum at0r: from now, you are on your own :D
11:48 at0r i saw this message: https://www.gluster.org/pipermail/gluster-users/2012-June/010382.html
11:48 glusterbot Title: [Gluster-users] Existing Data and self mounts ? (at www.gluster.org)
11:48 itisravi lookup from mount assigns gfids for a file if it is not present but that should not be an incentive to go to the backend and add stuff..
11:49 at0r the manual self-heal seems to replicate all data nicely :)
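Piecing together what at0r did, under post-factum's caveats and itisravi's warning, the sequence was roughly the following (a sketch only, reusing at0r's paths, not a recommended practice):

    gluster volume create test replica 2 gluster0:/mnt/brick0/test gluster1:/mnt/brick0/test force
    gluster volume start test
    mount -t glusterfs gluster0:/test /mnt/test
    # a lookup of each file from the mount assigns gfids, then a full heal
    # copies the data over to the empty brick
    find /mnt/test >/dev/null
    gluster volume heal test full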
11:49 at0r it's a lab machine btw ;)
11:51 itisravi obligatory xkcd reference: https://xkcd.com/1172/
11:51 glusterbot Title: xkcd: Workflow (at xkcd.com)
11:51 itisravi ;)
11:55 lupine that's how I feel at work right now
11:56 lupine (as the commenter, not as the author)
12:01 robb_nl joined #gluster
12:01 haomaiwa_ joined #gluster
12:03 Manikandan_ joined #gluster
12:06 at0r hmm, it seems not all files were replicated
12:07 post-factum itisravi++
12:07 glusterbot post-factum: itisravi's karma is now 5
12:07 at0r stat: cannot access /mnt/rsync/var/www/pipaa: Invalid argument
12:13 nehar joined #gluster
12:14 ashiq_ joined #gluster
12:17 pppp joined #gluster
12:21 sebamontini joined #gluster
12:31 johnmilton joined #gluster
12:31 spalai joined #gluster
12:39 Manikandan joined #gluster
12:44 armyriad joined #gluster
12:50 kanagaraj joined #gluster
12:50 sathees joined #gluster
12:55 armyriad joined #gluster
12:57 kanagaraj joined #gluster
13:05 tyrok_laptop How does Gluster determine which AFR version to run at?  I've got a two-node replica setup (no arbiter) on 3.7.8, and when self-heal happens, clients' I/O on the volume (even doing a simple "ls -l") freezes until the self-heal completes.  I'm wondering if maybe it's running at AFRv1 for some reason - the locking semantics of it seem to match the behavior I'm seeing.
13:11 n0b0dyh3r3 joined #gluster
13:11 jhyland joined #gluster
13:12 sebamontini joined #gluster
13:14 unclemarc joined #gluster
13:15 ppai joined #gluster
13:16 vmallika joined #gluster
13:18 pdrakeweb joined #gluster
13:32 jiffin joined #gluster
13:36 skoduri joined #gluster
13:40 bennyturns joined #gluster
13:46 JonB left #gluster
13:55 Upgreydd Hi. I have a question. Is GlusterFS mainly used in some Virtualization Environment Distro - Proxmox etc.?
13:57 nbalacha joined #gluster
13:59 sebamontini joined #gluster
13:59 ndevos Upgreydd: I'm not sure about "mainly", but there are Proxmox, oVirt, CloudStack and other projects that use Gluster for VM storafe
13:59 ndevos *storage even
13:59 mikemol joined #gluster
14:05 Saravanakmr joined #gluster
14:06 nehar joined #gluster
14:09 Upgreydd ndevos: I have only 2 nodes, i need VE and storage on each node
14:11 dlambrig_ joined #gluster
14:12 jhyland joined #gluster
14:13 sebamontini joined #gluster
14:13 ashiq_ joined #gluster
14:14 spalai joined #gluster
14:15 ndevos Upgreydd: you really want an odd number of storage servers, or at least a small 3rd server that can function as tie-breaker in case of network partitioning
14:17 renout_away joined #gluster
14:18 kdhananjay joined #gluster
14:23 theron joined #gluster
14:23 sebamontini joined #gluster
14:24 gowtham joined #gluster
14:27 Wizek joined #gluster
14:28 baoboa joined #gluster
14:29 hamiller joined #gluster
14:31 kdhananjay joined #gluster
14:36 chirino joined #gluster
14:37 theron joined #gluster
14:38 plarsen joined #gluster
14:40 skylar joined #gluster
14:40 nthomas joined #gluster
14:43 gem joined #gluster
14:44 hchiramm joined #gluster
14:59 arcolife joined #gluster
15:00 theron joined #gluster
15:01 nbalacha joined #gluster
15:04 Manikandan joined #gluster
15:04 PsionTheory joined #gluster
15:09 sebamontini joined #gluster
15:10 robb_nl joined #gluster
15:15 Ulrar joined #gluster
15:19 sebamontini joined #gluster
15:19 hgowtham joined #gluster
15:26 theron_ joined #gluster
15:27 robb_nl joined #gluster
15:28 jhyland joined #gluster
15:28 fsimonce joined #gluster
15:35 nbalacha joined #gluster
15:36 rafi joined #gluster
15:39 farhorizon joined #gluster
15:47 raghu joined #gluster
15:48 theron joined #gluster
15:54 deniszh joined #gluster
15:56 jhyland joined #gluster
16:07 Gaurav_ joined #gluster
16:14 jiffin joined #gluster
16:24 haomaiwang joined #gluster
16:26 drankis joined #gluster
16:26 Ulrar joined #gluster
16:35 nathwill joined #gluster
16:38 kanagaraj joined #gluster
16:45 sebamontini joined #gluster
16:46 dlambrig_ joined #gluster
16:55 shubhendu joined #gluster
17:01 haomaiwa_ joined #gluster
17:05 calavera joined #gluster
17:05 F2Knight joined #gluster
17:09 dlambrig_ joined #gluster
17:16 Manikandan joined #gluster
17:21 merp_ joined #gluster
17:24 dlambrig_ left #gluster
17:46 sebamontini joined #gluster
17:49 hackman joined #gluster
17:53 Upgreydd Hi guys, one question. I know that GlusterFS requires 3 nodes without an advisor. What's the best practice for two nodes? Two nodes with an advisor, or three nodes where the third has no storage and is there just to satisfy the requirement?
17:56 ivan_rossi left #gluster
17:58 Upgreydd Would the better way be two barebone servers with glusterfs and an advisor, or two barebone with gluster and one virtualized on a third server?
18:01 haomaiwa_ joined #gluster
18:08 Upgreydd I mean arbiter, not advisor ;)
18:10 julim joined #gluster
18:11 Upgreydd ndevos: what would be better? ;)
18:11 JoeJulian Three nodes, and I do mean "nodes" by definition, as what you're looking for are actual separate network endpoints, one with a minimum amount of storage as an arbiter. It will need some in order to store the metadata.
18:12 Upgreydd JoeJulian: can I calculate third node size somehow? I have two 14TB nodes and third 250GB only :/
18:13 JoeJulian Should be one inode for every filename and directory, I think.
18:14 hagarth joined #gluster
18:14 jiffin joined #gluster
18:15 rafi joined #gluster
18:15 armyriad joined #gluster
18:16 Upgreydd JoeJulian: I'm just planning mate. I haven
18:17 Upgreydd I don't have experience with GlusterFS, that's why I'm asking... what's the inode size?
18:17 hamiller joined #gluster
18:18 JoeJulian Depends on the filesystem used for your bricks. I generally use xfs with an inode size of 512 bytes.
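As a rough back-of-the-envelope with 512-byte inodes: 10 million files and directories would need on the order of 10,000,000 × 512 B ≈ 5 GB of inode space on the arbiter brick, plus some room for directory entries and the .glusterfs metadata tree, so even a modest disk goes a long way.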
18:40 haomaiwa_ joined #gluster
18:44 nthomas joined #gluster
18:45 CyrilPeponnet Hi Guys, one quick question: what does 'E [glusterd-op-sm.c:207:glusterd_get_txn_opinfo] 0-: Unable to get transaction opinfo for transaction ID : f6d668d5-007f-4cbb-bc8d-2faf61ec458d' mean?
18:45 DaKnOb joined #gluster
18:47 CyrilPeponnet and also @JoeJulian I had a strange issue with one vm using libgfapi; looks like the fs hung for more than 120s. But only this vm. Other vms on the same hypervisor were fine. But this particular vm is quite chatty writing files. What could I check?
18:47 DaKnOb joined #gluster
18:49 CyrilPeponnet http://pastebin.com/NTrFzeer
18:49 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
18:49 CyrilPeponnet http://fpaste.org/329391/42618514/ sorry @glusterbot
18:49 glusterbot Title: #329391 Fedora Project Pastebin (at fpaste.org)
18:54 merp_ joined #gluster
18:55 neofob joined #gluster
19:07 lupine joined #gluster
19:12 rafi CyrilPeponnet: What type of bricks are you using ?
19:13 CyrilPeponnet xfs on top of raid 0
19:13 CyrilPeponnet volume replica 2
19:16 CyrilPeponnet @rafi ?
19:18 rafi CyrilPeponnet: are you using thinlv ?
19:21 hchiramm joined #gluster
19:41 squizzi_ joined #gluster
19:43 CyrilPeponnet I don't think so
19:44 CyrilPeponnet it's qcow based for vm images
19:45 haomaiwa_ joined #gluster
19:47 theron joined #gluster
20:05 sebamontini joined #gluster
20:25 ro_ joined #gluster
20:28 DaKnOb joined #gluster
20:28 ro_ hey guys - I had my cluster slowly go down today and I'm having trouble bringing it back up. Would a server refusing a connection keep the whole cluster from coming back up?
20:29 ttkg joined #gluster
20:29 ro_ the first error I get in the logs is a "failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick subprocess is running."
20:29 ro_ running that command gives the expected output
20:30 ro_ then it gets a connection refused from one node, and a connection timed out from a separate one
20:31 CyrilPeponnet iptables rules ?
20:37 CyrilPeponnet @JoeJulian do you recommend cache=none for kvm qcow2 disks over libgfapi ?
20:40 JoeJulian I've never recommended qcow2.
20:41 CyrilPeponnet forget qcow2, I mean for caching
20:41 mowntan joined #gluster
20:42 CyrilPeponnet I have vms that hang on io flush; I will tweak vm.dirty_ratio and vm.dirty_background_ratio, but maybe cache=none could also help
20:42 JoeJulian Depends on the use case. I'll use caching if it fits. As a general purpose rule, though, none.
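A sketch of the two knobs being discussed, with made-up values and paths (the qemu line assumes the gluster:// block driver is available):

    # inside the guest: flush dirty pages earlier so a slow flush is less likely to
    # trip the kernel's 120s hung-task warning
    sysctl -w vm.dirty_background_ratio=5
    sysctl -w vm.dirty_ratio=10

    # on the hypervisor: run the disk without host-side caching
    qemu-system-x86_64 ... -drive file=gluster://server/volname/vm.qcow2,format=qcow2,cache=none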
20:42 ro_ iptable rules look fine
20:42 ro_ 24007 is open
20:42 lupine joined #gluster
20:45 JoeJulian @ports
20:45 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up. All ports must be reachable by both servers and clients. Additionally it will listen on 38465-38468/tcp for NFS. NFS also depends on rpcbind/portmap ports 111 and 2049.
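Translated into iptables on each server, the ports glusterbot lists come out roughly like this (the brick range is an example; bricks use one port each starting at 49152):

    iptables -A INPUT -p tcp -m multiport --dports 24007,24008 -j ACCEPT   # glusterd management (+rdma)
    iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT                 # brick processes
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT                 # gluster NFS
    iptables -A INPUT -p tcp -m multiport --dports 111,2049 -j ACCEPT      # rpcbind/portmap and NFS
    iptables -A INPUT -p udp --dport 111 -j ACCEPT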
20:45 Upgreydd JoeJulian: I can use XFS for the whole storage on each barebone (2 pcs), but how about the third server (virtualized in Hyper-V) - what's a reliable XFS storage size for the third node? I want to have data replicated only between the barebones
20:46 sebamontini joined #gluster
20:47 Upgreydd JoeJulian: and a second question, can I use server-to-server bonding or is it better to use a switch between them?
20:49 JoeJulian You _can_, but you don't get much from that. Each client connects directly to all the bricks in the filesystem.
20:50 JoeJulian As for what's a "reliable" xfs size, I've not had reliability problems with XFS in a decade. Any size will do.
20:50 merp_ joined #gluster
20:50 haomaiwa_ joined #gluster
20:53 Upgreydd JoeJulian: by "reliable" I mean that the third server will not contain replicated data, only checksums and voting. Do I need it connected via a bonded interface (4-8 Gbit), or will 1 Gbit be enough for this purpose? As I understand it, I need to keep the checksums (inodes) on the third server, so what size would be enough for that?
20:54 JoeJulian That depends on how many files and directories you will have. I cannot predict that number.
20:55 mhulsman joined #gluster
20:57 Upgreydd JoeJulian: I understand that, but for example, one VM with snapshots, configs, backups etc. is about ~100 files and dirs. Each file checksum, as I understand, = 512 bits? That would be about 0.0064 megabytes, correct?
20:59 sebamontini joined #gluster
20:59 Upgreydd so 10 million files and dirs would be 640 MB. Am I thinking about this the right way?
21:00 Upgreydd thinking this way, 10 GB would be more than enough for my purposes
21:03 post-factum JoeJulian: do you have any experience with arbiter volume in production?
21:04 Upgreydd post-factum: Hi ;)
21:05 post-factum JoeJulian: I would like to wonder what actually is being stored on arbiter node, and what disk I/O capacity it it required to have for arbiter node
21:05 post-factum s/it it/it is/
21:05 glusterbot What post-factum meant to say was: JoeJulian: I would like to wonder what actually is being stored on arbiter node, and what disk I/O capacity it is required to have for arbiter node
21:05 post-factum Upgreydd: hello
21:05 post-factum @arbiter
21:05 post-factum no luck with bot :)
21:06 post-factum glusterbot: arbiter
21:19 JoeJulian post-factum: no. I've just put together a test volume, created a bunch of files, and looked to see how it works.
21:19 JoeJulian Upgreydd: yes, that's more-or-less correct.
21:19 JoeJulian Upgreydd: leave a margin for error.
21:20 theron joined #gluster
21:44 lupine joined #gluster
21:46 theron joined #gluster
21:55 haomaiwa_ joined #gluster
21:55 Wizek joined #gluster
22:08 post-factum JoeJulian: will do the same
22:28 DV joined #gluster
22:34 john51_ joined #gluster
22:41 renout_away joined #gluster
22:45 merp_ joined #gluster
22:50 cyberbootje joined #gluster
22:56 sebamontini joined #gluster
22:59 haomaiwa_ joined #gluster
23:26 bitchecker joined #gluster
23:37 theron joined #gluster
23:49 DV__ joined #gluster
23:53 chirino joined #gluster
23:56 merp_ joined #gluster
23:57 calavera joined #gluster
