
IRC log for #gluster, 2015-07-14


All times shown according to UTC.

Time Nick Message
00:01 craigcabrey joined #gluster
00:16 tessier JoeJulian: Per your suggestion many weeks ago, and having finally started my xen/iscsi/md -> kvm/gluster migration in earnest, I tried using the libgfapi interface to the VM images instead of raw files. It would boot and mount ok but then almost immediately run into IO errors and the filesystem would go read only. You ever seen that happen before?
00:20 calavera joined #gluster
00:33 topshare joined #gluster
00:35 gildub joined #gluster
00:36 mahendra_ joined #gluster
00:45 glusterbot News from newglusterbugs: [Bug 1238446] glfs_stat returns bad device ID <https://bugzilla.redhat.com/show_bug.cgi?id=1238446>
00:52 MugginsM joined #gluster
01:04 MugginsM joined #gluster
01:12 mahendra__ joined #gluster
01:31 arthurh is cluster 3.7 considered stable / to be used in production?
01:31 arthurh gluster, even.
01:40 dgbaley arthurh: I thought it was: It's at the second point release and is pointed to by LATEST which I took to mean latest stable.
01:55 craigcabrey joined #gluster
02:02 overclk joined #gluster
02:03 harish joined #gluster
02:03 arthurh dgbaley, Thanks, just making sure, as the gluster.org site mentions latest as 3.7, but stability isn't explicit like the other links.
02:04 PatNarcisoZzZ joined #gluster
02:10 topshare_ joined #gluster
02:11 Tarik joined #gluster
02:12 Tarik Hey guys how are you? I am currently trying to set up glusterfs with 2 bricks, however is there a way to ignore replication of specific extensions?
02:13 Tarik (file extensions)
02:17 MugginsM joined #gluster
02:17 nangthang joined #gluster
02:19 Tarik I hope someone can help :(
02:20 davidbitton joined #gluster
02:35 dgbaley Tarik: Not that I know of, that sort of thing is covered in this: http://www.gluster.org/community/documentation/index.php/Features/data-classification which I believe is planned for 4.0
02:35 mribeirodantas joined #gluster
02:37 overclk dgbaley, I guess that's related to classifying data based on hot/cold, might not be for file exts.
02:38 dgbaley caching is what's seen the most work there, but the doc talks about all sorts of classification such as regulatory, placement, erasure vs replicating, ...
02:39 overclk to some extent, yes.
02:41 gildub joined #gluster
02:45 glusterbot News from newglusterbugs: [Bug 1242708] fuse/fuse_thread_proc : The fuse_graph_sync function cannot be handled in time after we fix-layout. <https://bugzilla.redhat.com/show_bug.cgi?id=1242708>
02:55 craigcabrey joined #gluster
02:57 bharata-rao joined #gluster
02:57 PatNarcisoZzZ joined #gluster
02:59 victori joined #gluster
03:04 vmallika joined #gluster
03:12 jcastill1 joined #gluster
03:15 kshlm joined #gluster
03:15 glusterbot News from newglusterbugs: [Bug 1242718] [RFE] Improve I/O latency during signing <https://bugzilla.redhat.com/show_bug.cgi?id=1242718>
03:18 Tarik joined #gluster
03:19 meghanam joined #gluster
03:23 TheSeven joined #gluster
03:29 jcastillo joined #gluster
03:40 atalur joined #gluster
03:46 atinm joined #gluster
03:58 calavera joined #gluster
04:00 plarsen joined #gluster
04:00 shubhendu joined #gluster
04:03 saurabh joined #gluster
04:11 RameshN joined #gluster
04:12 itisravi joined #gluster
04:15 smohan joined #gluster
04:19 kanagaraj joined #gluster
04:22 MugginsM joined #gluster
04:26 victori joined #gluster
04:29 PatNarcisoZzZ joined #gluster
04:30 ppai joined #gluster
04:31 arcolife joined #gluster
04:31 MugginsM joined #gluster
04:33 gem joined #gluster
04:34 rafi joined #gluster
04:40 yazhini joined #gluster
04:42 nbalacha joined #gluster
04:49 RameshN joined #gluster
04:50 ramteid joined #gluster
04:50 rjoseph joined #gluster
04:52 meghanam joined #gluster
04:53 nbalacha joined #gluster
05:01 jcastill1 joined #gluster
05:05 natarej_ joined #gluster
05:05 pppp joined #gluster
05:06 sakshi joined #gluster
05:06 jcastillo joined #gluster
05:09 kshlm joined #gluster
05:12 vmallika joined #gluster
05:15 spandit joined #gluster
05:16 soumya joined #gluster
05:16 glusterbot News from newglusterbugs: [Bug 1242734] GlusterD crashes when management encryption is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1242734>
05:16 DV joined #gluster
05:24 gem_ joined #gluster
05:25 dusmant joined #gluster
05:25 ghenry joined #gluster
05:25 hgowtham joined #gluster
05:26 uebera|| joined #gluster
05:27 Lee1092 joined #gluster
05:30 Manikandan joined #gluster
05:30 Saravana_ joined #gluster
05:30 kdhananjay joined #gluster
05:35 anil joined #gluster
05:43 ashiq joined #gluster
05:44 Bhaskarakiran joined #gluster
05:45 craigcabrey joined #gluster
05:46 jiffin joined #gluster
05:46 deepakcs joined #gluster
05:53 craigcabrey joined #gluster
05:53 sripathi joined #gluster
05:55 maveric_amitc_ joined #gluster
06:02 smohan joined #gluster
06:05 anmol joined #gluster
06:07 raghu joined #gluster
06:11 aravindavk joined #gluster
06:13 vimal joined #gluster
06:16 sadbox joined #gluster
06:17 Pupeno joined #gluster
06:20 jtux joined #gluster
06:21 meghanam joined #gluster
06:24 anmol joined #gluster
06:25 atalur joined #gluster
06:29 hchiramm joined #gluster
06:31 smohan joined #gluster
06:52 sadbox joined #gluster
06:54 RameshN_ joined #gluster
06:54 _fortis joined #gluster
06:58 Philambdo joined #gluster
07:05 meghanam joined #gluster
07:05 hagarth joined #gluster
07:05 [Enrico] joined #gluster
07:06 PatNarcisoZzZ joined #gluster
07:22 _fortis joined #gluster
07:42 fsimonce joined #gluster
07:43 Trefex joined #gluster
07:45 ctria joined #gluster
08:07 itisravi joined #gluster
08:10 elico joined #gluster
08:20 harish joined #gluster
08:28 smohan joined #gluster
08:39 meghanam joined #gluster
08:47 glusterbot News from newglusterbugs: [Bug 1242809] Performance: Impact of Bitrot on I/O Performance <https://bugzilla.redhat.com/show_bug.cgi?id=1242809>
09:08 topshare joined #gluster
09:17 ajames-41678 joined #gluster
09:19 nishanth joined #gluster
09:23 billyBob joined #gluster
09:27 billyBob Hello, I am trying to replace a crashed server but i can't get the new one to replace the old one. I have followed this: http://www.gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server . My main problem (I think) is that the replacement server is "State: Sent and Received peer request (Connected)" in
09:27 billyBob peer status
09:27 billyBob I can't get it to be "State: Peer in Cluster (Connected)"
09:28 billyBob any ideas?
09:28 gem_ joined #gluster
09:31 autoditac joined #gluster
09:31 chirino_m joined #gluster
09:33 aravindavk joined #gluster
09:37 shubhendu joined #gluster
09:39 nishanth joined #gluster
09:45 anmol joined #gluster
09:47 glusterbot News from resolvedglusterbugs: [Bug 1057292] option rpc-auth-allow-insecure should default to "on" <https://bugzilla.redhat.com/show_bug.cgi?id=1057292>
09:48 Manikandan joined #gluster
09:59 ira joined #gluster
10:10 kkeithley1 joined #gluster
10:11 aravindavk joined #gluster
10:17 glusterbot News from newglusterbugs: [Bug 1242875] Quota: Quota Daemon doesn't start after node reboot <https://bugzilla.redhat.com/show_bug.cgi?id=1242875>
10:26 overclk joined #gluster
10:32 shubhendu joined #gluster
10:33 anmol joined #gluster
10:33 soumya_ joined #gluster
10:34 LebedevRI joined #gluster
10:37 kovshenin joined #gluster
10:38 nsoffer joined #gluster
10:42 bene2 joined #gluster
10:47 glusterbot News from newglusterbugs: [Bug 1242882] Quota: Quota Daemon doesn't start after node reboot <https://bugzilla.redhat.com/show_bug.cgi?id=1242882>
10:48 atinm joined #gluster
10:48 maveric_amitc_ joined #gluster
10:52 dusmant joined #gluster
10:55 vmallika joined #gluster
10:57 Peppard joined #gluster
10:59 Saravana_ joined #gluster
11:02 RameshN joined #gluster
11:06 PatNarcisoZzZ joined #gluster
11:09 jcastill1 joined #gluster
11:14 jcastillo joined #gluster
11:15 vmallika joined #gluster
11:17 kdhananjay joined #gluster
11:17 glusterbot News from newglusterbugs: [Bug 1193298] [RFE] 'gluster volume help' output could be sorted alphabetically <https://bugzilla.redhat.com/show_bug.cgi?id=1193298>
11:17 glusterbot News from newglusterbugs: [Bug 1242892] SMB: share entry from smb.conf is not removed after setting user.cifs and user.smb to disable. <https://bugzilla.redhat.com/show_bug.cgi?id=1242892>
11:27 rafi1 joined #gluster
11:28 dusmant joined #gluster
11:32 jrm16020 joined #gluster
11:35 kdhananjay joined #gluster
11:39 soumya_ joined #gluster
11:42 ajames41678 joined #gluster
11:42 overclk joined #gluster
11:42 atalur joined #gluster
11:48 rafi joined #gluster
11:49 shubhendu joined #gluster
11:50 rafi joined #gluster
11:51 billyBob Hello again, does anybody know how to start the "Self-heal Daemon on localhost"? My cluster has got into a weird state. Still got one server "State: Sent and Received peer request (Connected)" in peer status, I can't get it to be "State: Peer in Cluster (Connected)"... Any help would be more than welcome...
11:54 atinm joined #gluster
11:54 dusmant joined #gluster
11:54 jmarley joined #gluster
11:57 meghanam joined #gluster
11:57 unclemarc joined #gluster
11:59 rafi REMINDER: Gluster Community Bug Triage meeting starting in another 1 minutes in #gluster-meeting
11:59 soumya_ joined #gluster
12:00 julim joined #gluster
12:00 itisravi billyBob: `gluster volume start <volname> force' should restart the shd
12:01 overclk joined #gluster
12:01 billyBob itisravi: ok will try thanks
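A minimal sketch of the restart-and-verify sequence itisravi describes above; "myvol" is a placeholder volume name, not one from this log:

    # "start ... force" respawns the volume's ancillary daemons, including the
    # self-heal daemon, without stopping brick processes that are already running
    gluster volume start myvol force
    # verify: the "Self-heal Daemon on localhost" row should now show Online: Y
    gluster volume status myvol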
12:03 topshare joined #gluster
12:04 Saravana_ anoopcs, ndevos ,  I can see the error in Fedora 21 with latest master 2015-07-14 12:03:28.304063] I [cli.c:711:main] 0-cli: Started running gluster with version 3.8dev
12:04 Saravana_ [2015-07-14 12:03:28.352956] E [mem-pool.c:417:mem_get0] (-->/usr/local/lib/libglusterfs.so.0(+0x2b1c5) [0x7f3fd8b2e1c5] -->/usr/local/lib/libglusterfs.so.0(log_buf_new+0x33) [0x7f3fd8b2a275] -->/usr/local/lib/libglusterfs.so.0(mem_get0+0x5e) [0x7f3fd8b6d497] ) 0-mem-pool: invalid argument [Invalid argument]
12:04 cleong joined #gluster
12:04 glusterbot Saravana_: ('s karma is now -91
12:04 topshare joined #gluster
12:04 vmallika joined #gluster
12:05 topshare joined #gluster
12:06 topshare joined #gluster
12:08 pdrakeweb joined #gluster
12:09 jtux joined #gluster
12:12 Trefex joined #gluster
12:18 glusterbot News from resolvedglusterbugs: [Bug 1222065] GlusterD fills the logs when the NFS-server is disabled <https://bugzilla.redhat.com/show_bug.cgi?id=1222065>
12:29 autoditac_ joined #gluster
12:29 topshare joined #gluster
12:31 ajames-41678 joined #gluster
12:40 autoditac_ joined #gluster
12:44 Trefex joined #gluster
12:46 B21956 joined #gluster
12:48 glusterbot News from newglusterbugs: [Bug 1242913] Debian Jessie as KVM guest on GlusterFS backend <https://bugzilla.redhat.com/show_bug.cgi?id=1242913>
12:48 glusterbot News from newglusterbugs: [Bug 1241238] setxattr and fsetxattr tests fail with gluster 3.5.4 and 3.7.2 <https://bugzilla.redhat.com/show_bug.cgi?id=1241238>
12:48 glusterbot News from newglusterbugs: [Bug 1241341] Multiple DBus signals to export a volume that's already exported crashes NFS-Ganesha <https://bugzilla.redhat.com/show_bug.cgi?id=1241341>
12:49 Saravana_ joined #gluster
12:56 hagarth joined #gluster
12:59 jcastill1 joined #gluster
12:59 smohan_ joined #gluster
13:00 shaunm joined #gluster
13:02 Trefex joined #gluster
13:02 wkf joined #gluster
13:03 chirino joined #gluster
13:04 jcastillo joined #gluster
13:05 overclk joined #gluster
13:06 soumya_ joined #gluster
13:08 cleong joined #gluster
13:10 DV joined #gluster
13:12 ekuric joined #gluster
13:13 julim joined #gluster
13:18 glusterbot News from resolvedglusterbugs: [Bug 1065619] gluster peer probe on localhost should fail with appropriate return value <https://bugzilla.redhat.com/show_bug.cgi?id=1065619>
13:19 Romeor joined #gluster
13:20 dusmant joined #gluster
13:24 mpietersen joined #gluster
13:24 pppp joined #gluster
13:26 overclk joined #gluster
13:27 georgeh-LT2 joined #gluster
13:28 dgandhi joined #gluster
13:29 nsoffer joined #gluster
13:30 nsoffer joined #gluster
13:34 bennyturns joined #gluster
13:34 hamiller joined #gluster
13:38 RedW joined #gluster
13:43 MilosCuculovic joined #gluster
13:43 MilosCuculovic Hi All
13:43 MilosCuculovic It is my first time joining the chat community.
13:43 bene2 joined #gluster
13:43 MilosCuculovic And I have a question about geo-replication
13:43 MilosCuculovic Am I in the right place?
13:47 hamiller Go ahead Milos, whats your question?
13:47 msvbhat MilosCuculovic: Please ask your question. If someone knows the answer they will reply
13:49 cholcombe joined #gluster
13:55 cyberswat joined #gluster
13:56 chirino_m joined #gluster
14:01 victori joined #gluster
14:02 Manikandan joined #gluster
14:04 squizzi joined #gluster
14:09 shubhendu joined #gluster
14:11 archit joined #gluster
14:11 AdrianH joined #gluster
14:15 MilosCuculovic Hi hamiller, msvbhat, sorry for the late reply
14:15 MilosCuculovic So, I have one master and one slave GlusterFS server
14:15 MilosCuculovic The replication works well
14:16 MilosCuculovic One server is in Switzerland and the 2nd in Hong Kong
14:16 MilosCuculovic My initial idea was to share files between two applications that both need to read and write in the cluster
14:17 MilosCuculovic Is it possible to do this with geo replication
14:17 MilosCuculovic So also write to the slave?
14:17 ashiq joined #gluster
14:20 hamiller MilosCuculovic, The Slave's data may take time to synchronize with the Masters, and should be considered 'READ-ONLY"
14:21 aravindavk joined #gluster
14:26 wushudoin joined #gluster
14:27 AdrianH left #gluster
14:28 BillyBobJoe joined #gluster
14:28 msvbhat MilosCuculovic: Yeah, The current version of geo-replication is unidirectional
14:29 msvbhat MilosCuculovic: It only syncs from master to slave
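For reference, a hedged sketch of how a one-way geo-replication session is usually inspected; "mastervol", "slavehost" and "slavevol" are placeholders:

    # geo-replication only pushes changes master -> slave; treat the slave as read-only
    gluster volume geo-replication mastervol slavehost::slavevol status
    # per-session settings (sync jobs, timeouts, etc.) can be listed and tuned with:
    gluster volume geo-replication mastervol slavehost::slavevol config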
14:29 ashiq joined #gluster
14:30 smohan joined #gluster
14:38 Twistedgrim joined #gluster
14:40 BillyBobJoe Hello, I am trying to replace a crashed server with a new one. I have 4 peers (distributed and replicated). 'gluster volume info' lists all bricks correctly on all peers. However when I run 'gluster volume status' on the new server it lists all the bricks as being online, but says that the 'self-heal daemon' isn't running on that new replacement
14:40 BillyBobJoe peer. When I run 'gluster volume status' on the other 3 peers it doesn't list the bricks from the replacement peer. 'gluster peer status' on the other 3 lists the new server as 'State: Sent and Received peer request (Connected)'. I've mounted the volume, created some text files and I can see them in the bricks of the replacement peer. I've basically
14:40 BillyBobJoe lost trust in my setup. Any ideas? (thanks for reading)
14:40 overclk joined #gluster
14:43 kbyrne joined #gluster
14:43 nbalacha joined #gluster
14:43 BillyBobJoe I've tried 'peer probe...', restarting Gluster on all peers, stopping and starting the volume...
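A hedged sketch of what is commonly checked when a peer sticks in "Sent and Received peer request"; the paths below are the stock locations and may differ per distribution:

    # on each healthy peer, see how the replacement node is recorded
    gluster peer status
    cat /var/lib/glusterd/peers/*      # uuid, hostname and state of every known peer
    # glusterd's own log usually says why the handshake did not complete
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    # restarting glusterd on the replacement node (brick processes keep running)
    # often lets the handshake finish once the peer files agree
    service glusterd restart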
14:43 MilosCuculovic msvbhat: Any idea if this can be acheived somehow?
14:46 theron joined #gluster
14:51 mckaymatt joined #gluster
14:51 jdossey joined #gluster
14:57 mpietersen joined #gluster
14:58 theron joined #gluster
14:58 msvbhat MilosCuculovic: I'm not sure. But I see that you have sent a mail to gluster-users, so someone might have some *cool* hack for the workaround
14:58 msvbhat MilosCuculovic: Something that is working may be :)
15:00 shyam joined #gluster
15:02 chirino joined #gluster
15:03 mckaymatt joined #gluster
15:05 plarsen joined #gluster
15:07 ashiq joined #gluster
15:11 woakes070048 joined #gluster
15:14 jmarley joined #gluster
15:17 jobewan joined #gluster
15:18 jcastill1 joined #gluster
15:21 BillyBobJoe How can I find information about why a peer is  'State: Sent and Received peer request (Connected)'?
15:21 mckaymatt joined #gluster
15:23 jcastillo joined #gluster
15:27 topshare joined #gluster
15:28 topshare joined #gluster
15:33 kshlm joined #gluster
15:33 kshlm joined #gluster
15:35 nsoffer joined #gluster
15:35 mckaymatt joined #gluster
15:45 B21956 joined #gluster
15:48 glusterbot News from newglusterbugs: [Bug 1243041] SMB: share entry from smb.conf is not removed after setting user.cifs and user.smb to disable. <https://bugzilla.redhat.com/show_bug.cgi?id=1243041>
15:51 soumya joined #gluster
15:53 nbalacha joined #gluster
15:55 rafi joined #gluster
15:57 jblack joined #gluster
15:59 rwheeler joined #gluster
16:03 LebedevRI joined #gluster
16:03 DV_ joined #gluster
16:14 kaushal_ joined #gluster
16:15 dgbaley joined #gluster
16:19 victori_ joined #gluster
16:23 craigcabrey joined #gluster
16:26 Trefex joined #gluster
16:28 shyam joined #gluster
16:31 meghanam joined #gluster
16:32 victori joined #gluster
16:32 cholcombe joined #gluster
16:35 calavera joined #gluster
16:41 shyam joined #gluster
16:42 bene2 tdasilva, ping
16:42 glusterbot bene2: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
16:50 mckaymatt joined #gluster
16:53 Rapture joined #gluster
16:54 nishanth joined #gluster
16:56 calavera joined #gluster
16:56 mpietersen joined #gluster
16:59 calavera_ joined #gluster
17:03 Trefex joined #gluster
17:06 mpietersen joined #gluster
17:08 mckaymatt joined #gluster
17:15 natarej_ anyone got experience using tiering?
17:17 shyam joined #gluster
17:17 jcastill1 joined #gluster
17:22 jcastillo joined #gluster
17:45 mckaymatt joined #gluster
17:50 mckaymatt joined #gluster
17:51 calavera joined #gluster
17:51 tessier 14.9MB/s when writing to gluster from inside this VM. Wish I knew why it was so slow...
18:03 anil joined #gluster
18:03 dgandhi joined #gluster
18:05 bene3 joined #gluster
18:10 DV_ joined #gluster
18:14 mckaymatt joined #gluster
18:16 JoeJulian tessier: Guesses include qcow2 and synchronized writes.
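If qcow2 turns out to be the culprit, a raw image created directly over libgfapi is the usual comparison point. A sketch, with "server1", "myvol" and the image name as placeholders:

    # create a raw image on the gluster volume via QEMU's gfapi URI syntax
    qemu-img create -f raw gluster://server1/myvol/vm1.img 40G
    # the libvirt disk definition then points at the volume instead of a file path:
    #   <disk type='network' device='disk'>
    #     <driver name='qemu' type='raw'/>
    #     <source protocol='gluster' name='myvol/vm1.img'>
    #       <host name='server1' port='24007'/>
    #     </source>
    #     <target dev='vda' bus='virtio'/>
    #   </disk>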
18:16 shyam joined #gluster
18:20 cyberswat joined #gluster
18:21 tertiary joined #gluster
18:21 JoeJulian @splitbrain
18:21 glusterbot JoeJulian: I do not know about 'splitbrain', but I do know about these similar topics: 'split brain', 'split-brain'
18:21 JoeJulian @split brain
18:21 glusterbot JoeJulian: To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
18:22 rafi1 joined #gluster
18:25 JoeJulian @forget 'split brain'
18:25 glusterbot JoeJulian: Error: There is no such factoid.
18:25 JoeJulian @forget "split brain"
18:25 glusterbot JoeJulian: The operation succeeded.
18:25 tertiary Can anyone point me to a doc on changing a node hostname?
18:25 JoeJulian @alias "split-brain" "split brain"
18:25 glusterbot JoeJulian: The operation succeeded.
18:25 JoeJulian @alias "split-brain" "splitbrain"
18:25 glusterbot JoeJulian: The operation succeeded.
18:25 Asmadeus_ joined #gluster
18:25 jblack tertiary: I've seen references to such a doc, but I haven't been able to track it down yet.
18:26 tertiary crap
18:26 PatNarcisoZzZ joined #gluster
18:26 msvbhat_ joined #gluster
18:27 JoeJulian tertiary: There's no way to do that if a volume uses that hostname. The solution I would use would be to down the volume, stop all gluster services on the servers, then sed replace the hostname on files under /var/lib/glusterd.
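A sketch of that procedure, assuming the old and new names are "oldhost" and "newhost" (placeholders) and that every volume and gluster service is stopped first:

    gluster volume stop myvol                  # once, for each volume
    # then on every server:
    service glusterd stop
    pkill glusterfsd                           # stop the brick processes as well
    grep -rl oldhost /var/lib/glusterd         # check which files reference the old name
    grep -rl oldhost /var/lib/glusterd | xargs sed -i 's/oldhost/newhost/g'
    service glusterd start
    gluster volume start myvol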
18:29 atrius_ joined #gluster
18:29 Adifex joined #gluster
18:30 jotun_ joined #gluster
18:30 wushudoin| joined #gluster
18:30 [7] joined #gluster
18:30 mjrosenb_ joined #gluster
18:30 cholcombe joined #gluster
18:36 wushudoin| joined #gluster
18:44 jblack when one performs a full heal on a pre-populated filesystem, are all of the files on a glusterclient mount there as necessary?
18:45 mckaymatt joined #gluster
18:45 jblack pardon, I worded that terribly.
18:46 jblack When one starts up a replicated volume across two servers, with a prepopulated filesystem on system A, and a full heal, will a glusterclient mount to both fileservers be able to serve all files during the synchronization?
18:47 jblack with the understanding, of course, that the operation may block as the servers play hot-potato with requests
18:48 jblack also, would there be any speed improvements to healing if I provide both servers with an initial copy of the data that they need to serve?
18:49 mckaymatt joined #gluster
18:49 autoditac joined #gluster
18:50 jbautista- joined #gluster
18:54 shyam jblack: In case the question does not get answered here, post this to gluster-users mailing list, and add pkarampu@redhat.com he maintains AFR and could possibly answer the question
18:56 jbautista- joined #gluster
18:57 jblack Ok.  I'll see if I can't chase down whoever maintains the faq and get it added there.
18:58 jblack Trying the mailing list is a good idea if I draw a blank here.
19:01 RedW joined #gluster
19:04 RedW joined #gluster
19:04 mpietersen joined #gluster
19:06 mpietersen joined #gluster
19:08 mpietersen joined #gluster
19:09 mpietersen joined #gluster
19:11 mpietersen joined #gluster
19:16 jblack joined #gluster
19:19 glusterbot News from newglusterbugs: [Bug 1243108] bash tab completion fails with "grep: Invalid range end" <https://bugzilla.redhat.com/show_bug.cgi?id=1243108>
19:26 Romeor any1 is running proxmox with glusterfs as storage backend for VMs?
19:31 cleong joined #gluster
19:32 natarej_ Romeor, waiting on my scrap bits to arrive and i'm building a lab to test just that
19:33 natarej_ is anyone here using the disperse feature?
19:34 Romeor natarej_: fine. when you expect to get ur hw?
19:35 natarej_ i'm waiting on the caddys and chassis
19:35 natarej_ supposed to arrive 14th-16th
19:36 Romeor ok. mail me pls, when you'll get them
19:36 Romeor romeo.r@gmail.com
19:37 Romeor i'm just having troubles with D8 installation only using this setup and this makes me mad
19:37 Romeor I AM SO MAD THAT I USE CAPS! really..  :)
19:38 natarej_ you tried other distros?
19:38 Romeor every other distro installs and runs great. d8 install is always interrupted on random step. but d8 installs on local disks just fine
19:39 natarej_ thats so strange
19:39 Romeor and i've got 4 proxmox nodes and there is same behavior on every of them
19:40 Romeor and yes i've done md5checksum
19:40 natarej_ i was wondering if i should have asked
19:40 natarej_ lol
19:41 Romeor noone can help me for 1,5 months already. proxmox guys say its glusterfs fault and don't even bother to create a lab with such setup. gluster guys say they have no exp with proxmox and will try to setup one sometime
19:41 Romeor other guys on mailing list who use ovirt say they run hundreds of d8 on gluster
19:42 natarej_ you could try doing a single gluster node
19:42 natarej_ an independant one
19:42 Romeor pls install gluster 3.6.4 on ur lab
19:43 Philambdo joined #gluster
19:43 natarej_ you could also try running a single node on 3.72
19:43 Romeor well, i run two nodes with distributed and ha bricks. so when i choose to distributed it is basically the same as single node.
19:44 natarej_ a single seperate node
19:44 natarej_ to make sure its not something with your specific cluster
19:44 badone joined #gluster
19:46 natarej_ i wonder why everyone uses replicate instead of parity?
19:46 natarej_ even on a 4 node setup you're going have 50% more volume
19:46 natarej_ i mean
19:46 natarej_ replicate instead of disperse
19:47 natarej_ does it not work?
19:47 natarej_ i can only find very little info on it
19:47 Romeor i've started when it was 3.4.2
19:47 Romeor there was replicated and distributed and their combinatios
19:48 Romeor not going to break this cluster :)
19:48 [o__o] joined #gluster
19:48 natarej_ is it production?
19:48 Romeor yep
19:49 natarej_ im really apprehensive about using gluster in production
19:49 DV_ joined #gluster
19:49 Twistedgrim joined #gluster
19:49 LebedevRI joined #gluster
19:49 natarej_ well.  thats the point of the lab.
19:49 * Romeor also now
19:49 wkf joined #gluster
19:49 Romeor now i'm really thinking of ceph
19:50 natarej_ yeah, but ceph is very slow.
19:50 Romeor as glusterfs is not really product one can call STABLE...  such weird problems and so poor devs reaction.
19:50 Romeor at least it is stable :D
19:50 natarej_ it's come a long way
19:51 natarej_ im buying all this junk to test ceph and gluster back to back
19:51 natarej_ and a few other bits and peices but its mostly storage.
19:52 natarej_ so far i'm leaning towards gluster
19:52 Romeor well, mby it is not really gluster's fault.. at least proxmox devs are not reacting also
19:52 natarej_ i think running VMs on ceph will just be too slow.
19:52 coredumb joined #gluster
19:54 natarej_ do you have a proxmox subscription?
19:54 Romeor no.
19:55 Romeor and i will not buy one with such attitude ever
19:55 natarej_ its logical to assume its your filesystem if it works from local though
19:55 natarej_ and you're using something out of the box like gluster
19:56 Romeor every other single distro installs fine on same gluster. just getting ready to install centos 7
19:56 Romeor today ubuntu 14.04 lts was fine
19:56 Romeor it uses almost same kernel as d8
19:57 Romeor i think there has something changed in d8 with networking and proxmox got some problems with qemu net drivers.
19:57 Romeor but devs are so lazy to check that out
19:58 Romeor it was great product when it was pure community driven.
19:58 Romeor but like always money break everything
19:59 natarej_ i'm sure if you had a red hat & proxmox subscription they'd sort it out for you :)
19:59 Romeor won't be surprised if they decide to close that project; after they changed to a subscription-type product with a nag-screen for those without one, lots of community members ran away
20:00 Romeor i'm sure, if i had both those subscription, i would run MS hyper-v better
20:00 Romeor or vmware sphere
20:00 lezo joined #gluster
20:01 Romeor these products (rh and proxmox) live for community costs (time for debugging and patches and other time of help) and then just sell what they got for free :)
20:02 maZtah joined #gluster
20:02 Romeor there are lots of bugs fixed thanks to centos community and glusterfs community :)
20:03 Romeor and in my situation, when it will be clear, that it is glusterfs fault (in an example), the RH would say: mkay, then it seems we don't support D8 on our gluster :D and will wait until some1 from community smarter than me contributes some kind of patch
20:04 Romeor i know how thos subscription things work. been there, did that.
20:09 natarej_ it in all likelyhood could be an issue with D8
20:11 mckaymatt joined #gluster
20:12 mckaymatt joined #gluster
20:18 calavera joined #gluster
20:22 Romeor but d8 devs do not react also :D
20:23 Romeor so it like to be alone on battlefield
20:23 Romeor so pls mail me when u'll get ready ur lab
20:23 Romeor with 3.6.4 glusterfs
20:28 dgbaley natarej_: did you see this that I posted yesterday on ceph vs. gluster: https://fio.monaco.cx
20:34 DV__ joined #gluster
20:34 sage joined #gluster
20:35 Romeor oh.. lol. no to ceph
20:35 Romeor thanx dgbaley
20:36 cholcombe i donno about 3.7.X anymore.. The quota translator seems pretty busted
20:36 cholcombe it's only picking up changes sometimes and only on some machines
20:36 cyberswat joined #gluster
20:36 cholcombe whereas 3.6 was perfect
20:49 wushudoin| joined #gluster
20:54 wushudoin| joined #gluster
20:56 _maserati joined #gluster
20:57 _maserati is it cool to ask questions about gluster here or should i hit up forums?
20:59 JoeJulian Here's good.
21:01 fiber0pti joined #gluster
21:01 _maserati I inherited a gluster environment and am rather new to it. I have two data centers I need to keep data sync'd across as fast as possible. Should I be looking into geo-replication or just treating the gluster server on site B as a replicated brick? (our address space looks like a LAN between the two)
21:15 julim joined #gluster
21:19 cholcombe _maserati: if your connection is low enough latency wise just treat it as a brick
21:21 _maserati as contingency i must ask, if i have to go geo-replication route, how quickly can i expect a file to be sent to site B? does it happen per file write or in batches every x mins?
21:22 cholcombe i actually don't remember.  it runs off inotify i think
21:22 cholcombe but i don't remember if it batches it
21:22 _maserati and actually i'd like to clear something up, if i treat site-b as a brick: if I do a file write at Site A, does the write flag as successful once it's stored at site A or do i have to wait for it to also get to Site B before the write is successful?
21:23 cholcombe you have to wait for both sites to ack the write
21:23 _maserati damn, so geo-rep would be the only alternative correct?
21:23 JoeJulian batches every 5(?) minutes by default, iirc.
21:23 cholcombe _maserati: how bad is your latency?
21:24 _maserati let me check, 1 sec
21:24 _maserati 12ms
21:24 cholcombe heh
21:24 cholcombe add it as a brick
21:25 cholcombe if you can stand a 24ms wait for your ack i'd just add it
21:25 cholcombe i mean *12ms
21:25 nage joined #gluster
21:26 _maserati there's a chance I can't... our developers' code isn't the best and it will lock up if a full file doesn't get ack'd quick enough. So last question i got, if i do geo-replication, does that work both ways? if i have a client at site-b write to site-b's gluster server, will that data replicate back to site a?
21:26 JoeJulian Nope, it's unidirectional (for now).
21:26 cholcombe i believe it's one way
21:27 _maserati okay. thank you guys a bunch. cleared up alot for me already
21:27 cholcombe _maserati: anything higher than 5ms and it's going to start to feel sluggish
21:27 cholcombe but 12 isn't too bad :)
21:27 _maserati my prob is it's usually a ton of little files >.<
21:28 cholcombe shit
21:28 cholcombe that's gonna suck then
21:28 _maserati yup
21:28 cholcombe can you cache and batch your writes?
21:28 _maserati that's pretty much what im about to go tell the developers
21:28 cholcombe maybe you can enable writeback on your cluster if your clients don't need to immediately read the data
21:29 _maserati That's interesting, what can i search for in the manual to read more on that? as in where does writeback come into play?
21:29 _maserati gluster client or server?
21:29 cholcombe it would be on the server
21:30 _maserati okay cool. thanks very much
21:30 cholcombe http://www.gluster.org/community/documentation/index.php/Translators/performance/writebehind
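A hedged sketch of enabling that from the CLI; "myvol" is a placeholder, and write-behind acknowledges writes once they are buffered on the client side, so it only suits data the application does not need to read back or fsync immediately:

    gluster volume set myvol performance.write-behind on
    gluster volume set myvol performance.flush-behind on            # also defer work done at close()
    gluster volume set myvol performance.write-behind-window-size 4MB
    gluster volume info myvol                                       # "Options Reconfigured" shows the result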
21:32 cholcombe JoeJulian, do you know if in 3.6+ gluster changed over to displaying peers with dns names instead of IP's?
21:32 cholcombe i'm adding my peers as ip and they're showing up under whatever dns name resolves them to
21:34 ndevos cholcombe: this was a feature for 3.6 that might be related: http://www.gluster.org/community/documentation/index.php/Features/Better_peer_identification
21:35 cholcombe thanks.  that's very helpful :)
21:35 ndevos at the bottom of that page there are some links to emails about it, not sure if the page itself addresses sufficient details
21:37 cholcombe yeah i'll check out the emalis
21:37 cholcombe it seems like if i add by ip and dns works it goes with the dns name
21:43 cholcombe ndevos: i think i see the problem in my code.  I'm trying to match ip's to hostnames by accident.  :)
21:45 ndevos cholcombe: ah, yes, without resolving it'll be difficult to compare :)
21:45 cholcombe haha
21:45 cholcombe i didn't realize until it started throwing errors
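One hedged way to avoid that mismatch is to normalise both sides to addresses (or both to names) before comparing; the host name and IP below are placeholders:

    # forward-resolve the name gluster reports to its address
    getent hosts gluster2.example.com | awk '{print $1}'
    # reverse-resolve a peer that was probed by IP
    getent hosts 192.0.2.12 | awk '{print $2}'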
21:55 calavera joined #gluster
22:05 Romeor WooooHooo! its 01:03 here and it seems to me like a little win in my battle. it seems like my problem is gone with the latest proxmox kernel and debian 8 !!!! but it only seems so. at least i've managed to install d8 and mate without problems. will test in the morning for more. I really hope it was just the kernel and d8 will work fine
22:06 JoeJulian Yay!
22:06 _maserati 1:03? where u live at?
22:06 JoeJulian Nice win, Romeor
22:07 Romeor Estonia, its emmm.. north-eastern europe
22:07 Romeor JoeJulian: i hope it is...
22:07 _maserati haha i know where estonia is, that's cool! I'm in Colorado, it's somewhere in the US =P  (16:07 here)
22:08 ndevos Romeor: oh, that would be something... really hope the cause is identified soon
22:08 Romeor i'm too sleepy for now on. but if it will be fine, i'll close bug and.. yay! i'll get some extra beer this weekend
22:10 Romeor i'm dreaming of visiting florida.
22:10 Romeor some day i will :D
22:11 davidbitton joined #gluster
22:11 * ndevos is on a bar on the 14th floor in Berlin atm, view is okay, drinks are very good ;-)
22:12 Philambdo joined #gluster
22:12 Romeor go to munich! i was there last summer
22:14 _maserati i'm going to stuttgart this summer!
22:14 Romeor we should schedule next triage meeting in munich.
22:15 Romeor it seem like all roads lead to germany
22:18 Romeor deem, if it will be kernel the cause of my problem... how do i explain guys from multimedia, that i have to restart their proxmox node with 14 VM that do transcoding with vlc and ffmpeg :D
22:19 Romeor they transcode live...
22:20 _maserati "Somebody accidently tripped the power cord, sorry everybody! It'll be right back up!"
22:21 Romeor its behind huge ups and gene backup.
22:21 jbautista- joined #gluster
22:21 JoeJulian have a second server and live-migrate...
22:22 JoeJulian Isn't that why we do this?
22:22 Romeor with double reserved PSU
22:22 Romeor migrating sometimes drops some processes
22:23 Romeor of course there is a cluster with proxmox and fencing is done
22:24 _maserati we use vmware here, never had a process drop with vmotion
22:24 Romeor oh i meant tcp sessions are sometimes droped
22:25 ndevos munich? come to Amsterdam :)
22:25 Romeor been there las summer also
22:26 Romeor didn't like the smell of weed every freaking place in center
22:26 ndevos ah, yeah, thats the tourists, I dont like those too much either...
22:26 ndevos :P
22:28 Romeor and as a surprise.. almost every 2nd whore in red lights district is russian or ukrainian..
22:28 ndevos I think thats for tourists too...
22:30 ndevos I'm happy to meet in Amsterdam, or, probably anywhere reasonably reachable in .nl and talk Gluster :)
22:31 ndevos tourist attractions can be fun too, but well, thats probably not a (single) good reason to meet
22:32 Romeor okay. last tequila shot is made.. going to bed. nn everyone
22:34 ndevos cya Romeor!
22:39 jbautista- joined #gluster
22:46 theron_ joined #gluster
22:49 victori joined #gluster
23:02 Rapture joined #gluster
23:03 wkf joined #gluster
23:09 shyam joined #gluster
23:12 kshlm joined #gluster
23:18 wkf joined #gluster
23:19 wkf joined #gluster
23:21 wkf joined #gluster
23:26 gildub joined #gluster
23:27 calavera joined #gluster
23:37 davidbitton joined #gluster
23:38 wkf joined #gluster
23:38 ghenry joined #gluster
23:38 ghenry joined #gluster
23:38 marcoceppi joined #gluster
23:38 marcoceppi joined #gluster
23:42 wkf joined #gluster
23:43 wkf joined #gluster
23:45 ira joined #gluster
