
IRC log for #gluster, 2015-11-05


All times are shown in UTC.

Time Nick Message
00:01 haomaiwa_ joined #gluster
00:01 calavera joined #gluster
00:03 zhangjn joined #gluster
00:05 plarsen joined #gluster
00:08 gzcwnk joined #gluster
00:10 gzcwnk need some help pls, got a basic gluster server going on ubuntu 14, an ubuntu 14 client mounts fine but I can't get a RHEL 7.1 client to mount, it's saying "mount -t glusterfs glusterp1.ods.vuw.ac.nz:datapoint /storage mount: unknown filesystem type 'glusterfs'"
00:10 Pupeno joined #gluster
00:10 gzcwnk also, "Package glusterfs-3.6.0.29-2.el7.x86_64 already installed and latest version"
00:11 * gzcwnk head scratch
00:14 jmarley joined #gluster
00:19 JoeJulian gzcwnk: yum install glusterfs-fuse
00:19 JoeJulian Oh, and use our yum repo
00:19 JoeJulian @latest
00:19 glusterbot JoeJulian: The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
00:19 JoeJulian Otherwise you're using the Red Hat Storage product and it's not completely compatible.
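For anyone hitting the same "unknown filesystem type 'glusterfs'" error later, the fix JoeJulian describes usually boils down to something like the sketch below on a RHEL/CentOS 7 client. The exact .repo path under the LATEST directory is an assumption; browse download.gluster.org for the real one.

    # Hypothetical sketch: fetch the community .repo file referenced above
    # (verify the path under http://download.gluster.org/pub/gluster/glusterfs/LATEST/),
    # then install the FUSE client package, which provides the mount helper.
    curl -o /etc/yum.repos.d/glusterfs-epel.repo \
        http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
    yum install -y glusterfs-fuse
    mount -t glusterfs glusterp1.ods.vuw.ac.nz:datapoint /storage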
00:20 lpabon joined #gluster
00:22 hgichon joined #gluster
00:46 cyberbootje joined #gluster
00:47 mlhamburg__ joined #gluster
00:54 kovshenin joined #gluster
01:00 zhangjn joined #gluster
01:01 gnudon joined #gluster
01:01 EinstCrazy joined #gluster
01:01 mlncn Hi all!
01:01 gnudon hello Mr Belo
01:03 pikurasa joined #gluster
01:03 pikurasa hi belo
01:04 pikurasa @gnudon hi
01:06 pikurasa mlncn: hi, this is devin.
01:06 F2Knight joined #gluster
01:08 mlncn A potential client was referred to us who has 1.5 million images managed with Gluster for a web site, and they've been told their performance problems are due to Gluster only being good for around 200,000 files.  This sounded like FUD to me.
01:09 JoeJulian yep
01:10 JoeJulian You'll want to tree out the directory structure to keep the number of files in a directory down. php does stupid things when looking in directories for files.
01:10 mlncn Can Gluster handle millions of image files, and if so, does someone want a job setting it up correctly / moving it to better hardware / whatever is actually needed?
01:11 JoeJulian There's a contractor referral on gluster.org
01:12 mlncn Web site is Drupal (which i do know).  Gluster, not so much.  But Drupal can put different types of files in different directories, which could definitely help, and be taught to put/look for files in any directory structure
01:12 bob_saget joined #gluster
01:12 mlncn Thanks, i'll look at that too.  https://www.gluster.org/consultants/ correct?
01:12 glusterbot Title: Professional Support Gluster (at www.gluster.org)
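"Tree out the directory structure" here means the usual hashed-subdirectory fan-out, so no single directory ends up holding hundreds of thousands of entries. A minimal shell sketch (the two-level, two-character split is an arbitrary choice, and Drupal can be configured to do the equivalent itself):

    # Place each file under a two-level directory derived from a hash of its name,
    # e.g. /mnt/gluster/images/ab/cd/photo-000123.jpg
    name="photo-000123.jpg"                              # example filename
    h=$(printf '%s' "$name" | md5sum | cut -c1-4)        # first 4 hex chars of the md5
    dir="/mnt/gluster/images/${h:0:2}/${h:2:2}"
    mkdir -p "$dir"
    cp "$name" "$dir/"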
01:14 * bob_saget slaps mlncn around a bit with a large trout
01:14 gzcwnk JoeJulian, yeah figured it, thanks
01:15 gzcwnk hmm trout....yum
01:15 bob_saget I could go for some salmon tbh
01:17 gzcwnk here in NZ you cannot buy trout, only fish for it
01:17 gzcwnk i have not had trout in 20 years
01:18 gzcwnk its illegal to sell it
01:18 bob_saget are you kidding?
01:18 bob_saget holy crap
01:18 JoeJulian Wow
01:18 gzcwnk nope
01:18 gzcwnk its a protected monopoly
01:18 gzcwnk I also need a licence to fish
01:19 gzcwnk annual licence
01:19 gzcwnk :(
01:19 bob_saget :'(
01:19 gzcwnk i can shoot venison for free, but I cant grow my own trout
01:20 bob_saget anyway, I was just here because we were using IRC in a presentation
01:20 bob_saget nice chatting
01:20 bob_saget bye
01:23 bennyturns joined #gluster
01:24 DV joined #gluster
01:26 dblack joined #gluster
01:29 hagarth joined #gluster
01:37 DV joined #gluster
01:42 Lee1092 joined #gluster
01:46 julim joined #gluster
01:47 chirino joined #gluster
01:53 gem joined #gluster
01:57 harish joined #gluster
01:58 nangthang joined #gluster
02:04 mlncn joined #gluster
02:08 johnmark_ joined #gluster
02:16 freescholar joined #gluster
02:35 GB21 joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:55 kdhananjay joined #gluster
02:57 haomaiwa_ joined #gluster
03:00 Pupeno joined #gluster
03:15 zhangjn_ joined #gluster
03:19 harish joined #gluster
03:19 haomaiwa_ joined #gluster
03:38 nishanth joined #gluster
03:38 mlncn joined #gluster
03:50 Doyle joined #gluster
03:51 Doyle Hey. I've got a log 'etc-glusterfs-glusterd.vol.log ' filling up with '[2015-11-05 03:49:24.335947] W [socket.c:588:__socket_rwv] 0-nfs: readv on /var/run/gluster/99802f3b6fe811c18561c096d1bcdf2e.socket failed (Invalid argument)'.   Is there a fix for this?
03:55 overclk joined #gluster
04:01 Doyle glusterfs 3.7.5 built on Oct  7 2015 16:27:17
04:03 kanagaraj joined #gluster
04:03 _maserati_ joined #gluster
04:06 shubhendu_ joined #gluster
04:09 Doyle Found this
04:09 Doyle https://bugzilla.redhat.com/show_bug.cgi?id=1222065
04:09 glusterbot Bug 1222065: high, high, ---, kparthas, CLOSED CURRENTRELEASE, GlusterD fills the logs when the NFS-server is disabled
04:12 atinm joined #gluster
04:15 dusmant joined #gluster
04:20 sakshi joined #gluster
04:23 JoeJulian Doyle: I suspect one of your services died. Just restart glusterd and see if that fixes it.
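A rough sketch of that check-and-restart, assuming a systemd-managed glusterd on 3.7; per the bug above, the messages are also expected (and harmless) when the built-in NFS server is simply disabled.

    # See whether the auxiliary daemons glusterd keeps polling (NFS, self-heal,
    # quota) are actually running
    gluster volume status
    # Restarting the management daemon does not restart the brick processes
    systemctl restart glusterd
    # Then watch whether the readv warnings keep accumulating
    tail -f /var/log/glusterfs/etc-glusterfs-glusterd.vol.log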
04:24 rafi joined #gluster
04:25 RameshN joined #gluster
04:28 calavera joined #gluster
04:29 TheSeven joined #gluster
04:29 gem joined #gluster
04:31 calavera_ joined #gluster
04:31 haomaiwang joined #gluster
04:37 nbalacha joined #gluster
04:38 rafi joined #gluster
04:39 rafi joined #gluster
04:59 ndarshan joined #gluster
05:00 Pupeno joined #gluster
05:01 haomaiwang joined #gluster
05:02 ashiq joined #gluster
05:07 pppp joined #gluster
05:10 kotreshhr joined #gluster
05:17 nbalacha joined #gluster
05:18 Manikandan joined #gluster
05:20 hagarth joined #gluster
05:21 calavera joined #gluster
05:22 aravindavk joined #gluster
05:28 neha_ joined #gluster
05:28 vimal joined #gluster
05:32 Bhaskarakiran joined #gluster
05:33 Humble joined #gluster
05:36 doekia joined #gluster
05:41 rafi1 joined #gluster
05:41 zhangjn joined #gluster
05:42 skoduri joined #gluster
05:49 vmallika joined #gluster
05:49 ramteid joined #gluster
05:52 hagarth joined #gluster
05:57 ramky joined #gluster
06:00 RameshN Humble, Can u merge http://review.gluster.org/#/c/11739/ ?
06:00 glusterbot Title: Gerrit Code Review (at review.gluster.org)
06:01 raghu joined #gluster
06:02 hgowtham_ joined #gluster
06:02 hgowtham_ joined #gluster
06:04 rjoseph joined #gluster
06:10 dusmant joined #gluster
06:13 martineg_ joined #gluster
06:16 overclk joined #gluster
06:17 kdhananjay joined #gluster
06:18 DV joined #gluster
06:21 deepakcs joined #gluster
06:26 itisravi joined #gluster
06:31 neha_ joined #gluster
06:35 jiffin joined #gluster
06:41 hagarth joined #gluster
06:42 gem joined #gluster
06:51 jwd joined #gluster
06:52 jwaibel joined #gluster
06:53 jwd_ joined #gluster
06:56 overclk joined #gluster
06:58 kshlm joined #gluster
07:01 nangthang joined #gluster
07:10 haomaiwa_ joined #gluster
07:11 Humble RameshN, it hasn't passed the gluster build system vote.
07:19 neha_ joined #gluster
07:22 ppai joined #gluster
07:35 ramteid joined #gluster
07:56 doekia joined #gluster
07:58 Telsin joined #gluster
08:00 zhangjn joined #gluster
08:01 haomaiwa_ joined #gluster
08:04 kovshenin joined #gluster
08:18 bhuddah joined #gluster
08:20 fsimonce joined #gluster
08:29 Pupeno joined #gluster
08:33 mlhamburg1 joined #gluster
08:38 [Enrico] joined #gluster
08:44 ashiq joined #gluster
08:47 ivan_rossi joined #gluster
08:50 EinstCrazy joined #gluster
08:50 poornimag joined #gluster
08:58 atalur joined #gluster
09:00 glafouille joined #gluster
09:03 lisak joined #gluster
09:06 gildub joined #gluster
09:07 haomaiwang joined #gluster
09:10 lisak hey, is it OK to check whether volume exists by checking "gluster volume info" exit status? https://gist.github.com/l15k4/0c09160d45f178a08f63#file-foo-bash-L30
09:10 glusterbot Title: foo.bash · GitHub (at gist.github.com)
09:11 lisak because it doesn't seem to work, the second node doesn't see the volume, even though probing works in both directions
09:11 lisak https://gist.githubusercontent.com/l15k4/93f56296a9344e494cc5/raw/343496bd9bebe2acb69bd16f448cee41144b9d2f/gistfile1.txt
09:11 DV joined #gluster
09:14 ctria joined #gluster
09:15 lisak I'm stuck with this for the whole day ... it works doing that manually step by step, but automating it via systemd leads to this weird issue and it is not a question of timing
09:18 hos7ein joined #gluster
09:20 suliba joined #gluster
09:22 LebedevRI joined #gluster
09:28 chkno joined #gluster
09:32 atinm lisak, no
09:33 atinm lisak, in case the cluster has no volumes configured it still returns 0, with a message saying no volumes are present, but the command is treated as a success
09:34 lisak I just tested it,   1 = no volumes present,  0 = at least one exists
09:34 lisak the first one also  has  "Volume name does not exist" on stderr
09:35 lisak I meant if you specify the name of the volume
09:36 chkno left #gluster
09:37 arcolife joined #gluster
09:39 lisak anyway I think the problem is caused by the 10-15 second response times...  probing or volume info requests take 15 seconds and the host becomes inaccessible at the time of querying
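For reference, a sketch of the exit-status check being debated, matching what lisak reports here (a named "gluster volume info <vol>" returns non-zero when the volume is unknown to that node); "myvol" is a placeholder volume name.

    #!/bin/bash
    # Returns 0 if the named volume is known to this node, non-zero otherwise.
    volume_exists() {
        gluster volume info "$1" > /dev/null 2>&1
    }

    if volume_exists myvol; then
        echo "volume present on this node"
    else
        echo "volume missing (or glusterd unreachable/slow, as described above)"
    fi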
09:41 cuqa_ joined #gluster
09:47 haomaiwa_ joined #gluster
09:48 aneale joined #gluster
09:51 [Enrico] joined #gluster
09:51 nbalacha joined #gluster
10:01 haomaiwa_ joined #gluster
10:10 Slashman joined #gluster
10:16 [Enrico] joined #gluster
10:23 zhangjn joined #gluster
10:35 zhangjn_ joined #gluster
10:36 kanagaraj joined #gluster
10:38 GB21 joined #gluster
10:39 skoduri joined #gluster
10:42 XpineX joined #gluster
10:48 EinstCrazy joined #gluster
10:51 harish_ joined #gluster
10:56 shubhendu_ joined #gluster
11:02 ndarshan joined #gluster
11:05 kkeithley1 joined #gluster
11:14 skoduri joined #gluster
11:15 shyam joined #gluster
11:17 glafouille joined #gluster
11:23 cliluw joined #gluster
11:40 wistof Hi, I had an issue with my gluster config (I tried to add an existing peer as a new one using another ip)
11:40 wistof the new peer was created with guid 0000-0000.....
11:42 wistof so I tried to detach it, but it removed the working one, and now I've only one peer on my first node (with a bad guid), and on the second node, no peer (since it was detached)
11:42 lpabon joined #gluster
11:42 wistof i would try to manually reconfigure my peers by creating a flat config file
11:44 wistof i will first shut down all gluster clients, then shut down the first node, modify the peer config file on the first node, then restart the gluster daemon
11:45 wistof then from the first node, try to probe the peer, to recreate the entry on the second node
11:45 wistof do you think it will work ?  is it missing some steps ?
11:48 lpabon joined #gluster
11:54 hgowtham_ joined #gluster
11:56 mhulsman joined #gluster
12:00 atinm wistof, I am really not sure what you are trying to do here
12:00 atinm wistof,  "i've only on peer on my first node (with a bad guid), and on the second node, no peer (since it was detach)" - this can never happen
12:00 atinm wistof, how did you add a peer to the cluster?
12:01 wistof atinm : i use a 3.27
12:01 atinm what's 3.27?
12:01 atinm do you mean 3.7.2 version of gluster?
12:02 skoduri joined #gluster
12:02 wistof gluster peer probe <ip>
12:03 wistof atinm : gluster 3.2.7
12:03 atinm 3.2.7, it's quite an old one
12:03 wistof i know, it's pretty old
12:03 atinm please use latest versions
12:04 wistof yes, i will, but i should get back to my working configuration
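For anyone attempting the same repair: the "flat config files" in question live under /var/lib/glusterd/. The sketch below reflects the layout on later 3.x releases and is only an assumption for 3.2.7, so copy the exact key names from a healthy node rather than from this example, and stop glusterd before editing.

    # This node's own identity:
    #   /var/lib/glusterd/glusterd.info
    #       UUID=<this node's uuid>
    #
    # One file per remote peer, named after that peer's uuid:
    #   /var/lib/glusterd/peers/<peer uuid>
    #       uuid=<peer uuid>
    #       state=3
    #       hostname1=<peer ip or hostname>
    #
    # After fixing the files on both nodes, restart and re-check:
    service glusterd restart     # init script name varies by distro/packaging
    gluster peer status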
12:08 zhangjn joined #gluster
12:09 RameshN joined #gluster
12:13 edwardm61 joined #gluster
12:17 jmarley joined #gluster
12:19 dusmant joined #gluster
12:19 nishanth joined #gluster
12:20 Manikandan joined #gluster
12:27 haomaiwang joined #gluster
12:32 [Enrico] joined #gluster
12:37 pgreg joined #gluster
12:39 zhangjn joined #gluster
12:40 zhangjn joined #gluster
12:42 DV joined #gluster
12:47 archit_ joined #gluster
12:50 mhulsman joined #gluster
12:51 firemanxbr joined #gluster
12:54 gem joined #gluster
13:01 haomaiwa_ joined #gluster
13:02 unclemarc joined #gluster
13:03 Manikandan joined #gluster
13:07 blubberdi joined #gluster
13:08 blubberdi Hey, if I have a replicated volume, is it safe to back up directly from the brick instead of the gluster mount?
13:13 kotreshhr joined #gluster
13:18 dgandhi joined #gluster
13:24 dusmant joined #gluster
13:28 shubhendu_ joined #gluster
13:30 nishanth joined #gluster
13:30 sghatty__ joined #gluster
13:31 aravindavk joined #gluster
13:32 plarsen joined #gluster
13:32 glafouille joined #gluster
13:40 chirino joined #gluster
13:41 jmarley joined #gluster
13:44 Trefex joined #gluster
13:46 kovshenin joined #gluster
13:47 rafi joined #gluster
13:48 shyam joined #gluster
13:50 ira joined #gluster
13:54 cked350 joined #gluster
14:01 rafi joined #gluster
14:01 haomaiwang joined #gluster
14:02 chirino joined #gluster
14:02 bennyturns joined #gluster
14:04 wistof is it secure to run 2 nodes over the internet ?  is the glusterfs protocol encrypted ?
14:05 kkeithley_ @encryption
14:06 kkeithley_ @repos
14:06 glusterbot kkeithley_: See @yum, @ppa or @git repo
14:06 kkeithley_ it's encrypted if you enable it.
14:06 wistof ok, i will check the doc
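A rough sketch of enabling that encryption on a recent 3.x volume, assuming the TLS certificate, key and CA bundle have already been placed at /etc/ssl/glusterfs.pem, /etc/ssl/glusterfs.key and /etc/ssl/glusterfs.ca on every server and client; "myvol" and the CNs are placeholders.

    # Encrypt the I/O path (clients <-> bricks)
    gluster volume set myvol client.ssl on
    gluster volume set myvol server.ssl on
    gluster volume set myvol auth.ssl-allow 'client1-cn,client2-cn'

    # The management path (glusterd <-> glusterd) is a separate switch:
    # create this flag file on every node, then restart glusterd
    touch /var/lib/glusterd/secure-access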
14:07 B21956 joined #gluster
14:11 Bhaskarakiran joined #gluster
14:13 overclk joined #gluster
14:13 blubberblase42 joined #gluster
14:16 shyam joined #gluster
14:21 Manikandan joined #gluster
14:21 rafi joined #gluster
14:32 neha_ joined #gluster
14:36 overclk joined #gluster
14:37 glafouille joined #gluster
14:43 zerick joined #gluster
14:44 rwheeler joined #gluster
14:46 samppah joined #gluster
14:46 lalatenduM joined #gluster
14:47 DJClean joined #gluster
14:47 partner joined #gluster
14:47 rastar_afk joined #gluster
14:49 skylar joined #gluster
14:57 Manikandan joined #gluster
14:58 sghatty__ 1. what is the expected behavior? Is the cluster.enable-shared-storage command expected to create shared storage? It seems odd to return a success message without creating the shared volume. 2. Any suggestions on how to get past this problem? https://www.irccloud.com/pastebin/NQoXfPCj/
14:58 glusterbot Title: Pastebin | IRCCloud (at www.irccloud.com)
14:59 sghatty__ All, I need your help configuring my glusterfs-Ganesha setup in HA active-active mode.
14:59 sghatty__ please see above pastebin
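For context, the flag being asked about is a cluster-wide option; the usual invocation looks roughly like this on 3.7, and it is expected to create and mount a gluster_shared_storage volume across the pool before the ganesha-ha setup is run (treat the exact mount path as an assumption).

    # Create the shared metadata volume used by NFS-Ganesha HA
    gluster volume set all cluster.enable-shared-storage enable

    # Verify it was actually created and mounted on the nodes
    gluster volume info gluster_shared_storage
    mount | grep shared_storage      # typically under /var/run/gluster/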
15:00 bluenemo joined #gluster
15:00 Manikandan joined #gluster
15:01 mlncn joined #gluster
15:01 haomaiwa_ joined #gluster
15:05 rafi joined #gluster
15:10 creshal joined #gluster
15:14 creshal Are there any "tweaking knobs" for gluster apart from cache size/io thread count? I'm getting less than 2% of the bare metal performance in some scenarios, which strikes me as awfully low.
15:25 creshal Ex., I get 84 tps with postgres inside gluster, and 5864 tps with postgres on bare metal.
15:25 creshal (pgbench -c 16)
15:25 bowhunter joined #gluster
15:28 kotreshhr left #gluster
15:31 gem joined #gluster
15:37 jiffin joined #gluster
15:39 adamaN joined #gluster
15:40 B21956 left #gluster
15:40 adamaN joined #gluster
15:41 adamaN joined #gluster
15:42 adamaN joined #gluster
15:42 Gill joined #gluster
15:47 RameshN joined #gluster
15:48 rafi joined #gluster
15:52 ayma joined #gluster
15:52 rafi joined #gluster
16:01 haomaiwa_ joined #gluster
16:06 cholcombe joined #gluster
16:13 creshal Well, I can get up to 10% performance by tuning postgres itself, but that's still pitiful.
16:18 kshlm joined #gluster
16:18 JoeJulian Wow, I've not tried postgres. Sounds like it does a lot of very small writes. Obviously if it's not filling up MTU, there's going to be a significant amount of network overhead.
16:19 JoeJulian You also have context switches over fuse. Whichever database finally provides a vfs layer that can use libgfapi will certainly have an advantage.
16:19 cornfed78 joined #gluster
16:21 creshal Whatever database it is, it'll have to work with Django and provide both JSON cols and a classic RDBMS. That doesn't leave much, if any, alternatives. ;)
16:24 JoeJulian I've always used mysql with django
16:25 JoeJulian But, to be fair, I don't have anything that really uses my django stuff heavily.
16:27 creshal We kinda need JSON cols, and I don't think Django supports them yet with MySQL. Even for postgres we need a 1.9-prerelease.
16:32 lkoranda_ joined #gluster
16:32 csaba1 joined #gluster
16:33 JoeJulian creshal: I'm reading up on postgres storage use. One thing that I see is that postgres segments files that exceed 1G. This is really useful if you have a distributed volume. Would be even better if you could figure out a way to ensure that each segment lands on a different dht subvolume.
16:34 creshal Right now I only have a replicated volume with 2 hosts, nothing fancier. :D
16:34 JoeJulian Move temporary files off gluster and onto fast storage, or ensure you're configured such that temporary tables are always created in memory.
16:35 creshal At that point I'd rather just skip gluster completely and set up regular postgres hot standby on bare metal.
16:36 JoeJulian I don't understand the correlation.
16:37 RameshN joined #gluster
16:37 JoeJulian I see they use 8k pages. That's good as long as you make sure your MTU is large enough.
16:37 creshal If I can't put the whole container into gluster, it makes no sense for me to bother with the setup overhead. Postgres' replication is easier to use than a setup where half of my container is in gluster and half outside.
16:38 JoeJulian Ah, I see what you're saying.
16:38 JoeJulian temp tables to disk are inherently slow anyway. I would always try to configure to avoid them.
16:39 creshal That's how I got from 1.5% to 10% performance. :D
16:40 JoeJulian Eager locking might help. It doesn't say here if pg closes the FDs between ops. I wouldn't think so but that might be worth investigating.
16:41 atalur joined #gluster
16:42 lkoranda joined #gluster
16:42 JoeJulian Try enabling network.remote-dio and cluster.eager-lock. Also test individually disabling performance.io-cache, performance.stat-prefetch, and performance.read-ahead. Those are all options that are used with vm images and I would kind-of expect similar storage requirements.
16:44 creshal Is there a documentation for all those settings?
16:45 JoeJulian "gluster volume set help" has a decent overview.
16:45 creshal Ahhh.
16:46 JoeJulian http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Performance%20Testing/ might be of interest.
16:46 glusterbot Title: Performance Testing - Gluster Docs (at gluster.readthedocs.org)
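Spelled out as commands, purely as a sketch with "myvol" standing in for the real volume name; as suggested above, flip them one at a time and re-run the benchmark between changes.

    # Options commonly used for VM-image style workloads
    gluster volume set myvol network.remote-dio enable
    gluster volume set myvol cluster.eager-lock enable

    # Test these individually by turning them off
    gluster volume set myvol performance.io-cache off
    gluster volume set myvol performance.stat-prefetch off
    gluster volume set myvol performance.read-ahead off

    # Descriptions of every option:
    gluster volume set help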
16:51 csaba1 joined #gluster
16:52 johnmark_ joined #gluster
16:52 lezo_ joined #gluster
16:53 r0di0n1 joined #gluster
16:56 jbrooks joined #gluster
16:57 ashka` joined #gluster
16:57 malevolent_ joined #gluster
16:57 purpleid1a joined #gluster
16:59 yosafbridge` joined #gluster
16:59 a2_ joined #gluster
17:01 badone_ joined #gluster
17:02 haomaiwa_ joined #gluster
17:05 aneale Hi, I have a problem using samba-vfs-glusterfs with gluster set to replica 4 where all reads come from a single brick - Is that expected behaviour as the fuse mount reads from all bricks?
17:07 bennyturns joined #gluster
17:08 JoeJulian Even fuse usually reads from one by default. It's "first available" where it sends out the queries starting with 1. That's tunable using cluster.read-hash-mode.
17:13 aneale In my testing it is multiple reads from multiple files all from a single brick with samba-vfs-glusterfs, all 4 bricks when using fuse - observed by gluster profiling.
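A sketch of the tunable and the measurement mentioned above ("myvol" is a placeholder; the exact semantics of each read-hash-mode value differ between releases, so check "gluster volume set help" rather than trusting the comment).

    # Ask the replication layer to hash reads across replicas
    # instead of always picking the first available brick
    gluster volume set myvol cluster.read-hash-mode 2

    # Re-check the per-brick read distribution with profiling
    gluster volume profile myvol start
    gluster volume profile myvol info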
17:15 creshal Derp. I've tried all knobs I could find, and get at best 15% bare metal performance. I'm going to cut my losses and just use postgres' replication without involving gluster at all. At least it'll still be useful for the non-database containers.
17:17 rafi joined #gluster
17:18 hagarth joined #gluster
17:18 ira joined #gluster
17:23 calavera joined #gluster
17:27 kanagaraj joined #gluster
17:37 shyam joined #gluster
17:40 vimal joined #gluster
17:44 ivan_rossi left #gluster
17:48 RameshN_ joined #gluster
17:48 Rapture joined #gluster
17:54 freescholar joined #gluster
18:01 haomaiwa_ joined #gluster
18:12 DavidVargese joined #gluster
18:23 RameshN__ joined #gluster
18:28 haomaiwang joined #gluster
18:28 ro_ joined #gluster
18:31 skylar joined #gluster
18:33 skylar joined #gluster
18:37 nishanth joined #gluster
18:39 RameshN_ joined #gluster
18:41 RayTrace_ joined #gluster
18:43 Gill_ joined #gluster
18:43 cte-st-g joined #gluster
18:46 mlhess joined #gluster
18:46 ndevos joined #gluster
18:46 ndevos joined #gluster
18:46 deni joined #gluster
18:46 d-fence joined #gluster
18:46 Gill joined #gluster
18:46 dastar joined #gluster
18:51 aneale Having run a fresh 300-second test using 4 bricks: samba-vfs-glusterfs spread the read calls across the bricks 0:0:0:13815, whereas fuse spread the read calls 7887:7490:7853:7903 - resulting in significantly lower ops/s on the samba-vfs-glusterfs implementation. I have tested all settings of cluster.read-hash-mode, with a volume stop and start each time, without any change in the results. Are there any other suggestions?
18:53 RedW joined #gluster
18:55 PatNarciso joined #gluster
19:00 JoeJulian Weird. Please file a bug report.
19:00 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
19:00 JoeJulian Could it possibly be a connection issue with the samba server?
19:01 haomaiwa_ joined #gluster
19:07 cliluw joined #gluster
19:14 shaunm joined #gluster
19:22 jwd joined #gluster
19:24 Vaelatern joined #gluster
19:24 mlncn joined #gluster
19:24 dblack joined #gluster
19:29 cliluw joined #gluster
19:29 aneale I don't think it is a connection issue with Samba as it's installed on one of the glusterfs peers and it's random as to which brick it chooses to use. I will file the bug report - Thanks for your help.
19:31 rich0dify joined #gluster
19:32 gzcwnk joined #gluster
19:39 jbrooks joined #gluster
19:41 krypton joined #gluster
19:47 krypton any idea how Gluster replication works ?
19:47 krypton I went over the nice how-tos I could find at: http://blog.gluster.org/category/geo-replication/
19:48 krypton but I'm still not able to figure out how the replication happens, and what happens if my source volumes are actively in use when I set up the replication
19:49 krypton do I risk any data corruption, or do I have to take any more steps to ensure data consistency?
19:55 JoeJulian It's essentially a targeted rsync.
19:56 p0rtal joined #gluster
19:56 JoeJulian referring to geo-replication as you referenced the documentation for that.
19:56 JoeJulian as files change, those files are tagged for update. When the next cycle happens, they are rsynced to the remote site.
19:57 krypton so if there are snapshots which I need on one site, they need to be available
19:57 F2Knight joined #gluster
19:57 krypton or I might want to trigger a cycle when I have a snapshot
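For later readers, a sketch of the geo-replication lifecycle JoeJulian is describing, as on 3.5+; "mastervol", "slavehost" and "slavevol" are placeholders and the slave volume must already exist.

    # One-time passwordless-ssh key distribution, then create the session
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem

    # Start the change-detection + rsync cycle described above
    gluster volume geo-replication mastervol slavehost::slavevol start

    # Per-brick crawl/sync status
    gluster volume geo-replication mastervol slavehost::slavevol status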
20:00 jobewan joined #gluster
20:01 haomaiwa_ joined #gluster
20:06 ghenry joined #gluster
20:27 gzcwnk anybody have some stats / idea how bad gluster performs v NFS or a single local disk?
20:29 kovshenin joined #gluster
20:31 F2Knight_ joined #gluster
20:41 krypton thanks JoeJulian
20:42 cte-st-g left #gluster
20:45 F2Knight joined #gluster
20:47 mhulsman joined #gluster
20:56 lpabon joined #gluster
20:58 archit_ joined #gluster
21:01 haomaiwa_ joined #gluster
21:03 calavera joined #gluster
21:07 klaxa joined #gluster
21:10 timotheus1 joined #gluster
21:28 ayma joined #gluster
21:28 arcolife joined #gluster
21:32 David_Vargese joined #gluster
21:42 arcolife joined #gluster
21:50 DavidVargese joined #gluster
21:58 kovshenin joined #gluster
21:59 Dragotha joined #gluster
22:01 haomaiwa_ joined #gluster
22:04 kovshenin joined #gluster
22:14 rich0dify joined #gluster
22:16 gzcwnk joined #gluster
22:19 gzcwnk anyone in? I have abysmal performance on gluster compared to local disk, trying to find out why
22:19 JoeJulian Because it's not local disk.
22:21 gzcwnk fair enough, but this is so bad it's unusable, I thought gluster was a high performance thing
22:22 gzcwnk nfs leaves it for dead
22:22 gzcwnk so either I'm doing something wrong, I have a hw issue or misconfig, or it's not up to what I need
22:25 glusterbot` joined #gluster
22:27 ayma joined #gluster
22:33 dblack joined #gluster
22:35 Dragotha left #gluster
22:49 glusterbot joined #gluster
22:52 calavera joined #gluster
23:01 haomaiwang joined #gluster
23:02 harish joined #gluster
23:16 gildub joined #gluster
23:28 calavera joined #gluster
23:32 wchance joined #gluster
23:33 wchance I am having an issue with FreeNAS rebooting every few weeks and would like to test with GlusterFS
23:33 wchance I also would like to support INFINIBAND and FIBRECHANNEL
23:34 wchance seems that QLOGIC has no drivers for Debian and I might be better off installing REDHAT
23:35 calavera joined #gluster
23:35 JoeJulian consider Archlinux
23:35 wchance Why archlinux?
23:36 JoeJulian I've been using it for 8 months now. It's nice.
23:36 JoeJulian And it's definitely going to support modern hardware.
23:37 wchance Do you use infiniband or fibrechannel?
23:39 JoeJulian I don't. We have way too much 10gig to replace any of it.
23:40 JoeJulian Plus, I define the SLA so I don't need the lower latency or the higher throughput.
23:40 JoeJulian Who knows, though. Maybe the next rev we might do that with ssd arrays.
23:41 wchance ok
23:45 jobewan joined #gluster
23:49 p0rtal wchance: I assume Infiniband vendor (Mellanox ?) would have drivers out for prevalent distributions in the Linux enterprise world
23:51 gzcwnk Redhat will cost
23:51 gzcwnk surprised debian has no qlogic as it's a kernel thing
23:52 wchance I am using FreeNAS and Proxmox and neither support qlogic fibrechannel
23:53 F2Knight joined #gluster
23:53 wchance so I am going to try a different OS like Centos to see if it supports it; there are REDHAT drivers for it, so I'm hoping it will work with Centos
23:55 gzcwnk it's a kernel thing, I used to use the base kernel into the SAN
23:55 gzcwnk so you don't need redhat drivers I think, I never used them
23:56 calavera joined #gluster
23:59 julim joined #gluster
23:59 martinetd joined #gluster
