IRC log for #gluster, 2015-03-30


All times shown according to UTC.

Time Nick Message
00:05 sage joined #gluster
00:06 virusuy joined #gluster
00:12 MugginsM joined #gluster
00:34 virusuy joined #gluster
00:38 haomaiwa_ joined #gluster
00:44 virusuy joined #gluster
00:53 T3 joined #gluster
00:54 osc_khoj joined #gluster
00:55 osc_khoj msg nickServ identify feb0@feb
00:55 osc_khoj sorry..^^
01:00 badone_ joined #gluster
01:01 bala joined #gluster
01:06 virusuy joined #gluster
01:06 virusuy joined #gluster
01:18 virusuy joined #gluster
01:40 wkf joined #gluster
01:41 virusuy joined #gluster
01:50 hagarth joined #gluster
01:50 virusuy joined #gluster
01:53 T3 joined #gluster
01:55 harish joined #gluster
02:02 virusuy joined #gluster
02:02 virusuy joined #gluster
02:06 T3 joined #gluster
02:12 virusuy joined #gluster
02:20 haomaiwa_ joined #gluster
02:23 virusuy joined #gluster
02:46 soumya_ joined #gluster
02:49 bharata-rao joined #gluster
02:53 virusuy joined #gluster
03:05 kshlm joined #gluster
03:30 ppai joined #gluster
03:32 nangthang joined #gluster
03:38 virusuy joined #gluster
03:44 kumar joined #gluster
03:46 virusuy joined #gluster
03:46 overclk joined #gluster
03:54 glusterbot News from resolvedglusterbugs: [Bug 1197631] glusterd crashed after peer probe <https://bugzilla.redhat.com/show_bug.cgi?id=1197631>
03:54 virusuy joined #gluster
03:54 virusuy joined #gluster
03:56 harish joined #gluster
03:57 sage joined #gluster
03:58 kanagaraj joined #gluster
04:02 shubhendu joined #gluster
04:02 spandit joined #gluster
04:03 itisravi joined #gluster
04:05 atinmu joined #gluster
04:07 plarsen joined #gluster
04:09 plarsen joined #gluster
04:10 nbalacha joined #gluster
04:11 virusuy joined #gluster
04:11 gildub joined #gluster
04:12 nishanth joined #gluster
04:24 badone_ joined #gluster
04:27 kasturi joined #gluster
04:28 osc_khoj joined #gluster
04:29 soumya joined #gluster
04:32 nishanth joined #gluster
04:37 kshlm joined #gluster
04:40 itisravi joined #gluster
04:45 kshlm joined #gluster
04:47 ppai_ joined #gluster
04:47 schandra joined #gluster
04:48 anoopcs joined #gluster
04:49 meghanam joined #gluster
04:53 kdhananjay joined #gluster
04:53 ndarshan joined #gluster
04:56 sripathi joined #gluster
05:01 siel joined #gluster
05:03 rafi joined #gluster
05:06 vimal joined #gluster
05:07 jiku joined #gluster
05:13 hagarth joined #gluster
05:19 jiffin joined #gluster
05:25 anil joined #gluster
05:26 spandit joined #gluster
05:27 nbalacha joined #gluster
05:29 Intensity joined #gluster
05:33 Manikandan joined #gluster
05:54 glusterbot News from newglusterbugs: [Bug 1207023] [RFE] Snapshot scheduler enhancements (both GUI Console & CLI) <https://bugzilla.redhat.com/show_bug.cgi?id=1207023>
05:54 glusterbot News from newglusterbugs: [Bug 1207028] [Backup]: User must be warned while running the 'glusterfind pre' command twice without running the post command <https://bugzilla.redhat.com/show_bug.cgi?id=1207028>
05:54 glusterbot News from newglusterbugs: [Bug 1207029] BitRot :- If peer in cluster doesn't have brick then its should not start bitd on that node and should not create partial volume file <https://bugzilla.redhat.com/show_bug.cgi?id=1207029>
05:55 krishnan_p joined #gluster
05:55 dusmant joined #gluster
05:56 harish joined #gluster
05:56 gem joined #gluster
06:03 foster joined #gluster
06:06 raghu joined #gluster
06:14 karnan joined #gluster
06:15 vijaykumar joined #gluster
06:19 lalatenduM joined #gluster
06:20 virusuy joined #gluster
06:22 deepakcs joined #gluster
06:33 anrao joined #gluster
06:34 Philambdo joined #gluster
06:35 virusuy joined #gluster
06:35 virusuy joined #gluster
06:45 virusuy joined #gluster
06:45 virusuy joined #gluster
06:45 gem joined #gluster
06:48 ndarshan joined #gluster
06:50 shubhendu joined #gluster
06:51 schandra joined #gluster
06:52 nangthang joined #gluster
06:53 nshaikh joined #gluster
06:59 maveric_amitc_ joined #gluster
07:08 krishnan_p joined #gluster
07:09 anrao joined #gluster
07:10 DV joined #gluster
07:13 soumya joined #gluster
07:14 krishnan_p joined #gluster
07:20 aravindavk joined #gluster
07:24 glusterbot News from newglusterbugs: [Bug 1207054] BitRot :- Object versions is not incremented some times <https://bugzilla.redhat.com/show_bug.cgi?id=1207054>
07:24 glusterbot News from newglusterbugs: [Bug 1194640] Tracker bug for Logging framework expansion. <https://bugzilla.redhat.com/show_bug.cgi?id=1194640>
07:25 deniszh joined #gluster
07:28 kovshenin joined #gluster
07:32 badone__ joined #gluster
07:35 kasturi joined #gluster
07:36 jiku joined #gluster
07:41 shubhendu joined #gluster
07:44 T3 joined #gluster
07:50 fsimonce joined #gluster
07:50 ndarshan joined #gluster
07:51 ghenry joined #gluster
07:51 ghenry joined #gluster
07:54 glusterbot News from newglusterbugs: [Bug 1099460] file locks are not released within an acceptable time when a fuse-client uncleanly disconnects <https://bugzilla.redhat.com/show_bug.cgi?id=1099460>
07:56 liquidat joined #gluster
07:57 ppai_ joined #gluster
08:04 chirino joined #gluster
08:06 jiku joined #gluster
08:28 mbukatov joined #gluster
08:30 rjoseph joined #gluster
08:33 corretico joined #gluster
08:33 Norky joined #gluster
08:39 harish joined #gluster
08:47 ctria joined #gluster
08:48 smohan joined #gluster
08:49 dusmant joined #gluster
08:50 ktosiek joined #gluster
08:54 prilly joined #gluster
08:55 glusterbot News from newglusterbugs: [Bug 1200264] Upcall: Support to handle upcall notifications asynchronously <https://bugzilla.redhat.com/show_bug.cgi?id=1200264>
08:55 glusterbot News from newglusterbugs: [Bug 1200266] Upcall: Support to filter out duplicate upcall notifications received <https://bugzilla.redhat.com/show_bug.cgi?id=1200266>
08:55 glusterbot News from newglusterbugs: [Bug 1200267] Upcall: Cleanup the expired upcall entries <https://bugzilla.redhat.com/show_bug.cgi?id=1200267>
08:55 T3 joined #gluster
09:01 lalatenduM joined #gluster
09:03 Debloper joined #gluster
09:03 DV_ joined #gluster
09:04 Dw_Sn joined #gluster
09:07 ppai_ joined #gluster
09:13 foster joined #gluster
09:25 glusterbot News from newglusterbugs: [Bug 1207115] geo-rep: add debug logs to master for slave ENTRY operation failures <https://bugzilla.redhat.com/show_bug.cgi?id=1207115>
09:30 foster joined #gluster
09:32 prilly joined #gluster
09:40 kasturi joined #gluster
09:43 Slashman joined #gluster
09:47 kshlm joined #gluster
09:50 rjoseph joined #gluster
09:52 hagarth joined #gluster
09:57 hchiramm joined #gluster
10:00 meghanam joined #gluster
10:00 bala joined #gluster
10:05 Marqin_ joined #gluster
10:06 ira joined #gluster
10:07 aravindavk joined #gluster
10:10 shubhendu joined #gluster
10:11 kotreshhr joined #gluster
10:11 itisravi joined #gluster
10:12 lifeofguenter joined #gluster
10:14 Manikandan joined #gluster
10:21 nishanth joined #gluster
10:23 thangnn_ joined #gluster
10:25 glusterbot News from newglusterbugs: [Bug 1207134] BitRot :- bitd is not signing Objects if more than 3 bricks are present on same node <https://bugzilla.redhat.com/show_bug.cgi?id=1207134>
10:25 glusterbot News from newglusterbugs: [Bug 1207146] BitRot:- bitd crashed <https://bugzilla.redhat.com/show_bug.cgi?id=1207146>
10:33 lalatenduM joined #gluster
10:41 Manikandan joined #gluster
10:41 ppai_ joined #gluster
10:44 T3 joined #gluster
10:47 Debloper joined #gluster
10:51 starkers left #gluster
10:53 aravindavk joined #gluster
10:55 glusterbot News from newglusterbugs: [Bug 1207152] BitRot :- bit-rot.signature and bit-rot.version xattr should not be set if  bitrot is not enabled on volume <https://bugzilla.redhat.com/show_bug.cgi?id=1207152>
10:55 glusterbot News from newglusterbugs: [Bug 1065626] Rebalance stop on a distributed-replicated volume shows wrong message on cli <https://bugzilla.redhat.com/show_bug.cgi?id=1065626>
11:06 firemanxbr joined #gluster
11:07 shubhendu joined #gluster
11:07 kotreshhr joined #gluster
11:08 rjoseph joined #gluster
11:09 _PiGreco_ joined #gluster
11:12 kdhananjay joined #gluster
11:13 meghanam joined #gluster
11:14 prilly joined #gluster
11:20 smohan_ joined #gluster
11:20 nishanth joined #gluster
11:25 LebedevRI joined #gluster
11:26 LebedevRI_ joined #gluster
11:27 LebedevRI joined #gluster
11:34 atinmu joined #gluster
11:38 DV__ joined #gluster
11:44 soumya joined #gluster
11:47 kdhananjay1 joined #gluster
11:48 schandra joined #gluster
11:58 haomaiwang joined #gluster
12:04 rjoseph joined #gluster
12:11 Gill joined #gluster
12:21 Ara4Sh joined #gluster
12:24 Gill left #gluster
12:24 poornimag joined #gluster
12:25 anoopcs joined #gluster
12:29 kotreshhr left #gluster
12:30 T3 joined #gluster
12:31 rjoseph joined #gluster
12:36 Norky joined #gluster
12:47 plarsen joined #gluster
12:49 DV__ joined #gluster
12:51 wkf joined #gluster
12:56 glusterbot News from newglusterbugs: [Bug 1207204] Data Tiering <https://bugzilla.redhat.com/show_bug.cgi?id=1207204>
12:56 glusterbot News from newglusterbugs: [Bug 1207215] Data Tiering:Remove brick on a tier volume fails <https://bugzilla.redhat.com/show_bug.cgi?id=1207215>
12:59 julim joined #gluster
13:02 Pupeno joined #gluster
13:02 Pupeno joined #gluster
13:04 Ara4Sh joined #gluster
13:12 vipulnayyar joined #gluster
13:12 julim joined #gluster
13:18 B21956 joined #gluster
13:26 glusterbot News from newglusterbugs: [Bug 1207227] Data Tiering:remove cold/hot brick seems to be behaving like or emulating detach-tier <https://bugzilla.redhat.com/show_bug.cgi?id=1207227>
13:27 firemanxbr joined #gluster
13:32 georgeh-LT2 joined #gluster
13:32 overclk joined #gluster
13:33 hamiller joined #gluster
13:34 kshlm joined #gluster
13:36 _PiGreco_ hello guys, question: I have 2 old bricks, with data on them, but no .glusterfs/
13:36 _PiGreco_ I try to create a volume with them, it succeeds, but then I am unable to read the files in the gluster mounted directory
13:36 _PiGreco_ any suggestions?
13:39 deepakcs joined #gluster
13:45 T3 joined #gluster
13:49 firemanxbr joined #gluster
13:49 dgandhi joined #gluster
13:49 dgandhi joined #gluster
13:51 dgandhi joined #gluster
13:51 Gill_ joined #gluster
13:54 gnudna joined #gluster
13:54 gnudna joined #gluster
13:54 gnudna left #gluster
13:54 sklav joined #gluster
13:55 gnudna joined #gluster
13:56 Gill joined #gluster
13:56 glusterbot News from newglusterbugs: [Bug 1207238] data tiering:force Remove brick is detaching-tier <https://bugzilla.redhat.com/show_bug.cgi?id=1207238>
13:56 glusterbot News from newglusterbugs: [Bug 1198963] set errno if gf_strdup() failed <https://bugzilla.redhat.com/show_bug.cgi?id=1198963>
14:01 bennyturns joined #gluster
14:01 haomaiwa_ joined #gluster
14:09 vipulnayyar joined #gluster
14:21 T0aD joined #gluster
14:25 haomaiwang joined #gluster
14:27 hagarth joined #gluster
14:32 chirino joined #gluster
14:37 RayTrace_ joined #gluster
14:39 bennyturns joined #gluster
14:48 kdhananjay joined #gluster
15:00 vipulnayyar joined #gluster
15:03 nbalacha joined #gluster
15:06 Leildin _PiGreco_, you need to rebuild the metadata of your files for the volume to publish them
15:06 bennyturns joined #gluster
15:06 anrao joined #gluster
15:07 _PiGreco_ Leildin: I think the real problem is I didn't know I had to clear all the xattrs recursively
15:07 _PiGreco_ and it's taking AGES, it's ~1.3TB and god knows how many directories and files
15:08 Leildin I didn't even know about the xattrs my first time juggling bricks. Imagine my puzzlement (I have a small brain)
15:08 _PiGreco_ I am seriously thinking about reformat/reimport from scratch, which takes a week anyway
15:08 _PiGreco_ I hate filesystems this big
15:09 Leildin well, I tried  sudo setfattr -x trusted.glusterfs.volume-id [brick_path]
15:09 Leildin followed by sudo setfattr -x trusted.gfid [brick_path]
15:09 _PiGreco_ yes, that allowed me to re-use the brick
15:09 jmarley joined #gluster
15:09 Leildin and then sudo rm -rf brick_path/.glusterfs
15:10 _PiGreco_ but the whole subtree has the gfid stored in xattrs
15:10 nbalacha joined #gluster
15:10 Leildin it reconstructed the metadata after that
15:10 _PiGreco_ mm it didn't in my case
15:10 wushudoin joined #gluster
15:10 Leildin and it was all fine, but that might be completely wrong; I'm navigating by sight most of the time with glusterfs
15:11 haomaiwang joined #gluster
15:11 corretico joined #gluster
15:11 _PiGreco_ me too.. well thanks for sharing your experience, I'll do the same when I'm done .. in a week or so (damn huge FS)
15:11 Leildin We also found (vol=gv0; brick=/data/sdb1/brick1; sudo setfattr -n  trusted.glusterfs.volume-id -v 0x$(sudo grep volume-id /var/lib/glusterd/vols/$vol/info | sudo cut -d= -f2 | sudo sed 's/-//g') $brick)
15:12 Leildin It's supposed to reconstruct xattr "fast"
15:12 Leildin not sure we have the same definition of fast
15:12 xiu .b 13
15:12 _PiGreco_ lol
15:13 Leildin That's all I can tell you about reconstructing metadata
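
For reference, the brick-reuse steps Leildin describes above combine into the following sketch. The volume name and brick path are placeholder values, and the last step assumes the target volume already has an info file under /var/lib/glusterd; treat it as an outline rather than a verified recipe.

    # Placeholder values; substitute your own volume name and brick path.
    vol=gv0
    brick=/data/sdb1/brick1

    # Strip the gluster identity xattrs so the brick can be reused in a new volume.
    sudo setfattr -x trusted.glusterfs.volume-id "$brick"
    sudo setfattr -x trusted.gfid "$brick"

    # Remove the old .glusterfs metadata directory.
    sudo rm -rf "$brick/.glusterfs"

    # Optionally re-stamp the brick with the new volume's id, read from glusterd's info file.
    sudo setfattr -n trusted.glusterfs.volume-id \
        -v 0x$(sudo grep volume-id /var/lib/glusterd/vols/$vol/info | cut -d= -f2 | sed 's/-//g') "$brick"
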
15:13 roost joined #gluster
15:13 Leildin I only use distributed volumes too
15:13 Leildin no replicas
15:14 _PiGreco_ me too for now, but adding a replica is easy, it doesn't scare me
15:14 Leildin (I have physical replication)
15:14 _PiGreco_ it's the initial import which is terrible due to the huge fragmentation, biggest file is like 50KB here
15:14 m0ellemeister joined #gluster
15:15 shubhendu joined #gluster
15:16 Leildin ah yes, gluster isn't very fond of small stuff
15:17 Leildin I'm copying 1.4 million files at the moment, half are small images
15:17 Leildin been at it for days
15:17 Leildin good thing it's not critical !
15:18 roost we have like 20 million images and videos lol
15:18 _PiGreco_ neither is my thing here, but I'd like to complete it before I die of old age
15:21 Leildin I'll set up an SSD cluster next to see the difference in speed, 7.2K is killing me :(
15:21 Leildin good luck roost if you ever have to copy that stuff :p
15:21 bala joined #gluster
15:23 coredump joined #gluster
15:26 glusterbot News from resolvedglusterbugs: [Bug 1206553] Data Tiering:Need to allow detaching of cold tier too <https://bugzilla.redhat.com/show_bug.cgi?id=1206553>
15:31 roost Leildin, :(
15:32 overclk joined #gluster
15:33 Leildin don't worry roost, they'll have found how to copy small files at lightning speed by then, I have faith !
15:42 vipulnayyar joined #gluster
15:48 jmarley joined #gluster
15:48 jmarley joined #gluster
16:02 chirino joined #gluster
16:06 overclk joined #gluster
16:09 anrao joined #gluster
16:10 billputer joined #gluster
16:11 nangthang joined #gluster
16:11 billputer Does anyone have an alternative Ubuntu repo for 3.3 now that the semiosis PPA has been removed?
16:11 JoeJulian @ppa
16:11 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
16:13 billputer JoeJulian: thanks, but that doesn't cover 3.3, which we're still using on an older cluster
16:16 JoeJulian Ah, he still had 3.3 stuff up there? Why would you need a ppa though? You've already got it installed.
16:17 JoeJulian You wouldn't be getting updates.
16:21 billputer we need to add a few new machines to the cluster
16:22 JoeJulian Doesn't that make it a newer cluster? ;)
16:23 JoeJulian semiosis probably has the build files laying around somewhere. If he does it's probably not a big deal to re-add that to the gluster ppa.
16:24 billputer that would be nice, I'd obviously like to upgrade the gluster version on that cluster, but I've got other fires to fight before getting that done, and the package disappearing from the PPA was a bit of a pain
16:26 glusterbot News from newglusterbugs: [Bug 1203739] Self-heal of sparse image files on 3-way replica "unsparsifies" the image <https://bugzilla.redhat.com/show_bug.cgi?id=1203739>
16:34 chirino joined #gluster
16:37 Gill joined #gluster
16:43 vipulnayyar joined #gluster
16:50 penglish1 joined #gluster
16:54 anrao joined #gluster
16:59 bene2 joined #gluster
17:02 T3 joined #gluster
17:03 ildefonso joined #gluster
17:31 _Bryan_ joined #gluster
17:33 julim joined #gluster
17:33 hamiller joined #gluster
17:39 vipulnayyar joined #gluster
17:48 Rapture joined #gluster
18:08 jermudgeon joined #gluster
18:15 jobewan joined #gluster
18:16 vipulnayyar joined #gluster
18:19 lalatenduM joined #gluster
18:27 glusterbot News from newglusterbugs: [Bug 1165938] Fix regression test spurious failures <https://bugzilla.redhat.com/show_bug.cgi?id=1165938>
18:27 glusterbot News from newglusterbugs: [Bug 1207343] SQL query failed during tiering rebalancer and write/read frequency thresolds not work <https://bugzilla.redhat.com/show_bug.cgi?id=1207343>
18:28 dbruhn joined #gluster
18:29 Gill joined #gluster
18:37 Gill joined #gluster
18:46 deniszh joined #gluster
18:48 deniszh joined #gluster
18:52 gnudna is the built-in nfs in gluster more efficient than using a glusterfs mount?
18:52 gnudna im looking to see what improvements i could possibly do for my replicated setup
18:53 gnudna currently running kvm images
18:53 deniszh joined #gluster
18:54 gnudna glusterfs seems slower than when i was using nfs but this is a test env at home aka a PoC
18:54 ndevos gnudna: for KVM, you really want to look into qemu+libgfapi, address the images through a gluster://host/dir/vm-img.qcow2 url
18:55 gnudna is there a true benefit?
18:56 gnudna i can just change my mount point
18:56 ndevos yes, you do not use a filesystem layer at all anymore
18:56 gnudna in my case i have /var/lib/libvirt/images mounted through glusterfs
18:56 deniszh joined #gluster
18:57 ndevos yeah, that's hitting a lot of context switches
18:57 gnudna example of df flux.sklav:/kvm (glusterfs mount) /var/lib/libvirt/images
18:57 ndevos http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/ might have some performance details
18:57 gnudna or do i need to do this using the gluster mount option in virt-manager
18:58 gnudna i did the change originally in the xml
18:58 gnudna but did not notice an improvement
18:58 pkoro joined #gluster
19:00 ndevos gnudna: maybe http://blog.nixpanic.net/2013/11/initial-work-on-gluster-integration.html helps too?
19:00 gnudna thanks ndevos
19:01 Gill_ joined #gluster
19:01 ndevos gnudna: the idea is to have qemu access the gluster servers directly, you can create images over a mountpoint, but the qemu commandline should contain the gluster:// url
19:01 social joined #gluster
19:02 gnudna ok did not see the second article
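
As a rough illustration of what ndevos describes (qemu talking to the gluster servers directly over libgfapi instead of going through a FUSE mount), assuming a qemu built with GlusterFS support; the host and volume reuse the flux.sklav / kvm names from the df example above, and the image name is made up.

    # Create a qcow2 image directly on the gluster volume, no filesystem mount involved.
    qemu-img create -f qcow2 gluster://flux.sklav/kvm/test-vm.qcow2 20G

    # Start a guest against the same gluster:// URL so qemu accesses the bricks directly.
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -drive file=gluster://flux.sklav/kvm/test-vm.qcow2,if=virtio,format=qcow2
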
19:04 Gill joined #gluster
19:09 osc_khoj joined #gluster
19:09 Gill_ joined #gluster
19:10 JoeJulian There's also the chance that your performance tests are flawed, or maybe you're measuring throughput for replicated writes and comparing them with unreplicated nfs.
19:11 JoeJulian What kind of performance number are you looking for, and did you engineer to meet that expectation?
19:12 gnudna JoeJulian in general none of my stats are accurate per se
19:12 gnudna i did for example a dd test
19:12 gnudna on the glusterfs mounted volume
19:12 gnudna vs the nfs one
19:13 JoeJulian Did you exceed the memory cache?
19:13 gnudna in general i notice a slight delay when logging into the vm vs when it was on nfs
19:13 gnudna unlikely
19:13 gnudna then again i do not know what the default is
19:13 JoeJulian Then you're not doing a valid comparison.
19:13 gnudna i never modified it to be honest past the default
19:14 JoeJulian nfs uses FSCache in the kernel.
19:14 gnudna how can i see what values are there by default?
19:14 JoeJulian So there's no guarantee that you're in-sync between clients.
19:14 JoeJulian It can use all of free memory.
19:15 gnudna ok then
19:15 JoeJulian Any performance throughput test should always exceed total ram.
19:16 gnudna so in theory dd should be set like 10G in order to properly test
19:16 gnudna i have 8G on each gluster/kvm server
19:17 JoeJulian Also, make sure your block sizes (with dd) fill up your MTU. A default 512 byte dd block will waste a lot of packet capability for every TCP header.
19:17 ndevos JoeJulian: FSCache is normally not enabled, you need to run cachefilesd and mount the nfs export with the fsc mount option
19:17 gnudna my mtu is set to 9000
19:17 gnudna on the servers since my switch and servers nics support jumbo frames
19:17 JoeJulian ndevos: ok, I guess, but I haven't done anything and I can see actions *not* hitting the network on nfs.
19:19 JoeJulian gnudna: Yes, but if you "dd if=something of=/glustermount/something" you'll waste 8.4k of that capability because each block will only have 512 bytes.
19:19 ndevos JoeJulian: oh, yes, the NFS-client has a more advanced caching than fuse does (nfs targets networks, fuse is generic)
19:19 gnudna i see
19:19 JoeJulian I usually do "bs=4M" for dd.
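
Putting JoeJulian's two points together (write more data than the 8G of RAM on each server, and use 4M blocks), a throughput test on the FUSE mount might look something like the sketch below; the target path comes from gnudna's df example earlier, and conv=fsync is added so the result isn't flattered by the page cache.

    # ~10G written in 4M blocks, exceeding the 8G of RAM on each server.
    dd if=/dev/zero of=/var/lib/libvirt/images/ddtest bs=4M count=2560 conv=fsync
    rm /var/lib/libvirt/images/ddtest
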
19:22 gnudna ok so here are the results
19:22 gnudna 838860800 bytes (839 MB) copied, 17.1469 s, 48.9 MB/s
19:22 gnudna 41943040 bytes (42 MB) copied, 0.416403 s, 101 MB/s
19:22 JoeJulian Oh, ndevos, maybe I have the wrong one. One's FSCache, the other's CacheFS. One of those uses cachefilesd, the other doesn't.
19:22 gnudna 419430400 bytes (419 MB) copied, 4.04904 s, 104 MB/s
19:23 gnudna 42M and 419M are similar in speed
19:23 gnudna 839M is almost half the performance
19:24 ndevos JoeJulian: FS-Cache and Cache-FS are related, cachefilesd connects both iirc
19:24 JoeJulian gnudna: replica 2?
19:25 gnudna yes
19:25 JoeJulian ding, ding, ding!
19:25 JoeJulian So you're writing to 2 servers with replica 2.
19:25 JoeJulian ergo, half the throughput.
19:25 ndevos JoeJulian: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/filesystems/caching - cachefiles.txt and fscache.txt
19:26 gnudna i am obviously missing something
19:26 gnudna so in theory if i add a 3rd server this should improve?
19:27 JoeJulian If you add a 3rd server and do replica 3, your write bandwidth would be effectively cut into 3 because your client would then be writing to three servers.
19:27 ndevos JoeJulian: fscache.txt has a nice ascii diagram, it shows the relation between the FSCache components
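
For completeness, ndevos' two requirements for NFS-client caching (a running cachefilesd and the fsc mount option) amount to something like the following; the server and export names are placeholders, and the cachefilesd package is assumed to be installed and configured.

    # Start the cache daemon that backs FS-Cache on local disk.
    sudo service cachefilesd start

    # Mount the NFS export with the fsc option so reads can be cached locally.
    sudo mount -t nfs -o fsc server:/export /mnt/nfs
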
19:28 gnudna i guess i am looking at this wrong
19:29 gnudna so when i add 3rd server can i keep replica to 2
19:29 JoeJulian @brick order
19:29 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
19:29 JoeJulian But that's not going to change your write speeds.
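
To make glusterbot's brick-order factoid concrete for the question above (keeping replica 2 while growing a volume), a sketch with placeholder names and paths; with replica 2, bricks have to be added in multiples of two, so a third server can only contribute a new pair together with a fourth brick somewhere.

    # Existing 2-node replica 2 volume.
    gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1

    # Growing it later while keeping replica 2: add a new pair of bricks,
    # which becomes an additional distribute subvolume, then rebalance.
    gluster volume add-brick myvol server3:/data/brick1 server4:/data/brick1
    gluster volume rebalance myvol start
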
19:29 penglish2 joined #gluster
19:30 JoeJulian If you're pushing 84MB across a network, it's going to take twice as long as 42MB. 2 replicas means you're writing to two targets.
19:31 gnudna fair enough
19:31 gnudna i would need distributed to see any real benefit in writes
19:32 gnudna in my case i set this up for redundancy
19:32 gnudna and i can say it works
19:32 JoeJulian cool
19:33 gnudna i was just wondering what common-practice improvements i could add
19:33 gnudna aka squeeze what little performance i could out of my old hardware ;)
19:33 gnudna then again each server has a raid 1 setup
19:34 gnudna so in theory this is overkill
19:34 Pupeno_ joined #gluster
19:34 gnudna i could have gone raid0 locally and then replicated across to the 2nd node
19:36 gnudna can't see how raid1 locally with distributed could be beneficial since i would lose the cluster on any downtime
19:38 JoeJulian I agree
19:38 gnudna maybe i will reverse the setup
19:39 JoeJulian raid0, at least, will allow your rust to keep up with your network.
19:39 gnudna raid 0 locally and distributed gluster
19:39 gnudna yeah that is what i will be doing this weekend
19:39 gnudna hehe ;)
19:39 JoeJulian raid 0 and replica 2, imho.
19:39 gnudna exactly
19:41 JoeJulian If you want wire speed writes, look into new-style replication. I think it was supposed to make it into 3.6, iirc.
19:43 gnudna ok will look into it
19:43 gnudna i am already running 3.6.x
19:43 gnudna so will look for the option
19:47 hchiramm_ joined #gluster
19:49 lalatenduM joined #gluster
19:51 pkoro joined #gluster
20:01 penglish1 joined #gluster
20:35 o5k joined #gluster
20:44 gnudna left #gluster
20:45 dgandhi joined #gluster
20:45 badone_ joined #gluster
20:54 rotbeard joined #gluster
20:57 anrao joined #gluster
21:10 R0ok_ joined #gluster
21:13 mbukatov joined #gluster
21:41 vijaykumar joined #gluster
21:47 o5k joined #gluster
21:47 Pupeno joined #gluster
21:49 wkf joined #gluster
21:50 julim joined #gluster
22:04 R0ok_ joined #gluster
22:06 corretico joined #gluster
22:29 vijaykumar joined #gluster
22:33 T3 joined #gluster
23:02 o5k joined #gluster
23:05 T3 joined #gluster
23:11 penglish2 joined #gluster
23:25 osc_khoj joined #gluster
23:30 Alpinist joined #gluster
23:40 penglish1 joined #gluster
23:47 chirino joined #gluster
23:58 Pupeno_ joined #gluster
