
IRC log for #gluster, 2014-05-20


All times shown according to UTC.

Time Nick Message
00:03 jbd1 joined #gluster
00:33 tdasilva joined #gluster
00:48 mjsmith2 joined #gluster
00:53 chirino joined #gluster
00:53 DV_ joined #gluster
00:56 gdubreui joined #gluster
00:58 yinyin joined #gluster
01:45 bala joined #gluster
01:50 DV__ joined #gluster
02:01 Ark joined #gluster
02:07 harish joined #gluster
02:24 harish joined #gluster
02:26 sadbox joined #gluster
02:32 MugginsM I have a Gluster 3.4.2 two-server replica and am having a lot of trouble with folders being created with mismatched gfids
02:32 MugginsM When it happens (several times a day) the parent folder generates an I/O error on clients and often their gluster mounts vanish or appear empty
02:33 MugginsM could creating a folder with the same name on both servers at the same time cause something like this?
02:37 MugginsM this seems to have started happening recently, as our load has increased
02:44 yinyin joined #gluster
02:53 saurabh joined #gluster
02:54 sjm joined #gluster
03:06 kdhananjay joined #gluster
03:08 mjsmith2 joined #gluster
03:13 kshlm joined #gluster
03:16 dusmant joined #gluster
03:30 glusterbot New news from newglusterbugs: [Bug 1099294] Incorrect error message in /features/changelog/lib/src/gf-history-changelog.c <https://bugzilla.redhat.com/show_bug.cgi?id=1099294>
03:31 DV_ joined #gluster
03:39 kanagaraj joined #gluster
03:40 RameshN_ joined #gluster
03:40 RameshN joined #gluster
03:45 shubhendu joined #gluster
03:46 kumar joined #gluster
03:47 RameshN__ joined #gluster
03:49 vimal joined #gluster
03:51 itisravi joined #gluster
03:54 chirino_m joined #gluster
03:55 vpshastry joined #gluster
04:11 bharata-rao joined #gluster
04:11 ppai joined #gluster
04:19 ngoswami joined #gluster
04:27 sahina joined #gluster
04:33 lalatenduM joined #gluster
04:35 psharma joined #gluster
04:36 dusmant joined #gluster
04:41 haomaiwa_ joined #gluster
04:46 atinmu joined #gluster
04:54 chirino joined #gluster
04:57 sjm left #gluster
04:58 DV_ joined #gluster
05:04 nshaikh joined #gluster
05:07 davinder joined #gluster
05:09 mjsmith2 joined #gluster
05:16 ravindran1 joined #gluster
05:16 nishanth joined #gluster
05:20 bala joined #gluster
05:21 rastar joined #gluster
05:22 hagarth joined #gluster
05:26 prasanthp joined #gluster
05:32 rejy joined #gluster
05:34 rahulcs joined #gluster
05:34 ndarshan joined #gluster
05:44 raghu joined #gluster
05:53 hagarth joined #gluster
05:53 aravindavk joined #gluster
05:54 dusmant joined #gluster
06:03 vimal joined #gluster
06:08 gdubreui joined #gluster
06:08 kanagaraj joined #gluster
06:09 mjsmith2 joined #gluster
06:14 ricky-ti1 joined #gluster
06:36 nishanth joined #gluster
06:37 hchiramm_ joined #gluster
06:39 meghanam joined #gluster
06:40 bala2 joined #gluster
06:40 karimb joined #gluster
06:45 aravindavk joined #gluster
06:46 RameshN__ joined #gluster
06:48 shubhendu joined #gluster
06:52 ndarshan joined #gluster
06:53 hagarth joined #gluster
06:56 ctria joined #gluster
06:59 andreask joined #gluster
07:00 glusterbot New news from newglusterbugs: [Bug 1099369] [barrier] null gfid is shown in state-dump file for respective barrier fops when barrier is enable and state-dump has taken <https://bugzilla.redhat.com/show_bug.cgi?id=1099369>
07:03 hchiramm_ joined #gluster
07:07 Pupeno joined #gluster
07:09 mjsmith2 joined #gluster
07:09 DV_ joined #gluster
07:13 ravindran2 joined #gluster
07:14 ricky-ticky joined #gluster
07:20 shubhendu joined #gluster
07:21 rgustafs joined #gluster
07:21 ndarshan joined #gluster
07:21 keytab joined #gluster
07:21 ktosiek joined #gluster
07:22 nishanth joined #gluster
07:24 bala joined #gluster
07:25 dusmant joined #gluster
07:25 DV_ joined #gluster
07:25 rahulcs_ joined #gluster
07:26 RameshN__ joined #gluster
07:27 vpshastry joined #gluster
07:31 aravinda_ joined #gluster
07:32 nikk__ joined #gluster
07:32 TvL2386 joined #gluster
07:32 hagarth1 joined #gluster
07:32 neoice_ joined #gluster
07:33 Philambdo joined #gluster
07:34 sulky_ joined #gluster
07:36 foster joined #gluster
07:37 ccha2 hi , using 3.5.0, I got on log "[2014-05-20 07:35:13.596683] E [index.c:267:check_delete_stale_index_file] 0-VMS-index: Base index is not createdunder index/base_indices_holder"
07:43 crashmag joined #gluster
07:43 fsimonce joined #gluster
07:50 Pupeno How do I add a brick as replica?
07:52 xymox joined #gluster
07:52 davinder Hello All
07:53 davinder I am new in Gluster ...I want to create gluster volume to access in 10 servers ...most of time its read from that volume
07:53 davinder which method will be best
07:53 davinder need HA also
07:55 samppah Pupeno: gluster volume add-brick volName replica N server:/brick
07:55 samppah davinder: are all 10 servers acting as gluster servers aswell?
07:55 davinder o
07:55 davinder NO
07:55 davinder clients
07:55 davinder two gluster server only
07:55 Pupeno samppah: I'm getting: wrong brick type: replica, use <HOSTNAME>:<export-dir-abs-path>
07:55 davinder 10 are clients to access the volumes
07:56 Pupeno I was told that a distributed volume would automatically become replica if I added replicas.
07:57 samppah Pupeno: can you show your exact command you are using?
07:57 Pupeno Yes.
07:58 samppah davinder: ok, then you have to use replica 2 to create volume
07:58 edward1 joined #gluster
07:58 davinder Creating Replicated Volumes .... is this one ?
07:58 Pupeno samppah: https://gist.github.com/pupeno/9d10779a9c92d2cba33e
07:58 glusterbot Title: gist:9d10779a9c92d2cba33e (at gist.github.com)
07:58 samppah davinder: yes
07:59 davinder It will any impact on read or write performance of volumes
07:59 davinder ?
07:59 samppah Pupeno: gluster volume add-brick uploads replica 2 revisionist:/var/lib/gluster/brick01
08:00 Pupeno samppah: same error: https://gist.github.com/pupeno/9d10779a9c92d2cba33e
08:00 glusterbot Title: gist:9d10779a9c92d2cba33e (at gist.github.com)
08:00 samppah davinder: yes, there is some overhead.. for example client is writing to both servers same time so basically your write speed is bandwidth / 2
08:00 samppah davinder: also when client is reading it's checking that data is consistent so latency is very important
08:00 xymox joined #gluster
08:01 vpshastry1 joined #gluster
08:01 samppah Pupeno: hmm
08:01 Pupeno I'm running glusterfs 3.2.5 in case this matters.
08:01 samppah Pupeno: what version you are using?
08:01 samppah ok
08:01 samppah is it possible for you to upgrade it?
08:02 Pupeno Maybe.
08:02 davinder For high write rate then which volume setup we should create ?
08:02 samppah davinder: what's your actual use case?
08:03 davinder two use case
08:03 Pupeno I could go to 3.4.2. Can I have a 3.4.2 server with a 3.2.5 client?
08:03 samppah Pupeno: nope :(
08:03 davinder First one : one volume should be available on 10 physical KVM servers so that create multiple guests machines
08:04 davinder gluster FS will have images of OS
08:04 davinder so read operation will happen more
08:05 davinder I will second use case also .... but for upper use case what will be the best practise for KVM/cloud envoirement
08:05 davinder it will all move to openstack cloud
08:06 Pupeno Should I use the packages that come with Ubuntu or the PPA in https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.5
08:06 glusterbot Title: ubuntu-glusterfs-3.5 : semiosis (at launchpad.net)
08:07 samppah davinder: okay, there is some tuning options  to store VM images.. replicate should be fine if you need HA
08:07 samppah Pupeno: i'd recommend PPA
08:08 xymox joined #gluster
08:09 mjsmith2 joined #gluster
08:14 edward2 joined #gluster
08:15 flowouffff is there anyone here for helping a guy struggling with a gluster-swift error ( http://pastebin.com/mTk2ydLA )  ?
08:15 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
08:16 liquidat joined #gluster
08:16 xymox joined #gluster
08:21 flowouffff i guess that my configuration is right, but i still get this message while I try to get an auth token: Account Server 127.0.0.1:6012/volume_not_in_ring
08:23 xymox joined #gluster
08:25 Pupeno So, can anyone confirm that I can create a volume with only 1 brick, as distributed, and then add a second one and convert it to a replica?
08:25 ngoswami joined #gluster
08:26 gdubreui joined #gluster
08:31 xymox joined #gluster
08:35 Pupeno Errr... I just created a fresh gluster server, and from another machine I mounted it without having to authenticate in any way.
08:36 hagarth flowouffff: ppai might be able to help or else it would be better to hit the gluster-users mailing list
08:37 hchiramm_ joined #gluster
08:37 samppah Pupeno: yes, it's possible to create volume with one brick as distributed and then convert it to replica
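For reference, the conversion samppah confirms might look like this (a sketch with hypothetical host and brick paths; note that changing the replica count via add-brick requires the 3.3+ syntax, which is why Pupeno's 3.2.5 install was failing and an upgrade was suggested):

```shell
# Create a plain one-brick (distributed) volume:
gluster volume create uploads server1:/var/lib/gluster/brick01
gluster volume start uploads

# Later, convert it to a two-way replica by adding a second brick
# and raising the replica count in the same command:
gluster volume add-brick uploads replica 2 server2:/var/lib/gluster/brick01
```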
08:39 xymox joined #gluster
08:39 ppai flowouffff, have you executed/run gluster-swift-gen-builders command
08:40 ppai flowouffff,
08:40 ppai <flowouffff> is there anyone here for helping a guy struggl
08:40 ppai flowouffff, https://github.com/gluster/gluster-swift/blob/master/doc/markdown/quick_start_guide.md#generate-ring-files
08:40 glusterbot Title: gluster-swift/doc/markdown/quick_start_guide.md at master · gluster/gluster-swift · GitHub (at github.com)
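The ring-generation step ppai is pointing at, per the linked quick start guide (the volume name is a placeholder; the ring must include every gluster volume you want exposed over the Swift API):

```shell
# Regenerate the account/container/object ring files for the gluster
# volume, then restart the swift services so they pick up the new rings:
gluster-swift-gen-builders myvolume
swift-init main restart
```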
08:40 ProT-0-TypE joined #gluster
08:43 monotek left #gluster
08:43 Slashman joined #gluster
08:45 warci joined #gluster
08:47 xymox joined #gluster
08:54 xymox joined #gluster
08:55 rastar joined #gluster
09:00 shay- joined #gluster
09:01 xymox joined #gluster
09:04 rastar joined #gluster
09:09 beneliott joined #gluster
09:09 xymox joined #gluster
09:09 mjsmith2 joined #gluster
09:15 kdhananjay left #gluster
09:15 kdhananjay joined #gluster
09:15 kdhananjay left #gluster
09:15 circ-user-gxKC5 joined #gluster
09:17 xymox joined #gluster
09:19 ben___ any gluster dev here? have an issue with files not closing properly - if i vim in a file on the client it leaves the .swp file on the client and servers. anyone seen this issue before?
09:25 xymox joined #gluster
09:27 fjsh joined #gluster
09:31 fjsh Hi, we are having a weird issue with gluster volume after upgrade from CentOS 6.2 to 6.5. When I want to list the root of the 'final' volume, I get input/output error. But I can create the files on the volume and if I take a look at the bricks on the other server, the data is correctly replicated.
09:32 fjsh Also it is possible to list the subfolders on the volume (if you know the path) and access the files ther
09:32 fjsh occasionally it throws input/output error but the folders and files are listed.
09:32 xymox joined #gluster
09:32 fjsh we then also tried to upgrade from Gluster 3.4.1 to 3.5, but it did not help
09:32 fjsh Would anybody have any tips what to check?
09:33 fjsh The bricks are ext4 and the final volume is mounted as 'glusterfs' filesystem with 'defaults,_netdev' options
09:33 kdhananjay joined #gluster
09:33 fjsh we also tried it without the _netdev option since we read that on CentOS 6.4 (so maybe also 6.5 is affected) it doesn't have any effect
09:33 fjsh In the logs there are no errors or warnings :/
09:40 xymox joined #gluster
09:41 hchiramm_ joined #gluster
09:47 xymox joined #gluster
09:48 aravinda_ joined #gluster
09:55 ravindran1 joined #gluster
09:55 xymox joined #gluster
09:58 chirino_m joined #gluster
10:02 scuttle_ joined #gluster
10:02 ricky-ti1 joined #gluster
10:03 ravindran2 joined #gluster
10:03 doekia joined #gluster
10:04 xymox joined #gluster
10:08 qdk joined #gluster
10:09 mjsmith2 joined #gluster
10:12 xymox joined #gluster
10:18 yinyin joined #gluster
10:18 ctria joined #gluster
10:19 rastar joined #gluster
10:20 xymox joined #gluster
10:23 ira joined #gluster
10:27 xymox joined #gluster
10:31 jwww_ joined #gluster
10:31 jwww_ Hello.
10:31 glusterbot jwww_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:36 xymox joined #gluster
10:37 bala joined #gluster
10:39 haomaiwa_ joined #gluster
10:43 haomai___ joined #gluster
10:43 spandit joined #gluster
10:43 RameshN joined #gluster
10:44 xymox joined #gluster
10:48 jcsp joined #gluster
10:51 foobar How do I replace a brick in a replicated volume... one of the bricks is dead, disk has been replaced with a new/empty one
10:53 ngong joined #gluster
10:54 haomaiwa_ joined #gluster
10:55 xymox joined #gluster
11:02 xymox joined #gluster
11:02 liquidat joined #gluster
11:03 foobar ah... found it ... 'replace brick ... commit force'
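The command foobar found, sketched with hypothetical volume and brick names (the replaced disk starts empty, so self-heal repopulates it afterwards):

```shell
# Swap the dead brick for the new empty one, skipping the data-migration
# phase ("commit force"); self-heal then copies the data across:
gluster volume replace-brick myvol server1:/bricks/old server1:/bricks/new commit force

# Trigger a full self-heal so the new brick is repopulated promptly:
gluster volume heal myvol full
```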
11:06 _Bryan_ joined #gluster
11:08 ngong Hello! I’m looking for some help in restoring some glusterfs volumes to their former glory after updating from ubuntu server 13.10 to 14.04 - is this a good place to ask?
11:09 ngong My volumes have turned into ghosts… they can’t be mounted or modified because they don’t exist - but they can’t be created either since the new volume conflicts with an old one…
11:09 mjsmith2 joined #gluster
11:10 xymox joined #gluster
11:16 doekia joined #gluster
11:17 xymox joined #gluster
11:17 diegows joined #gluster
11:18 rahulcs joined #gluster
11:19 davinder joined #gluster
11:25 kdhananjay joined #gluster
11:26 xymox joined #gluster
11:34 xymox joined #gluster
11:35 karimb joined #gluster
11:42 xymox joined #gluster
11:42 DV_ joined #gluster
11:47 kkeithley1 joined #gluster
11:49 xymox joined #gluster
11:54 mjsmith2 joined #gluster
11:56 xymox joined #gluster
12:03 edward1 joined #gluster
12:03 xymox joined #gluster
12:07 koodough joined #gluster
12:10 xymox joined #gluster
12:14 tdasilva left #gluster
12:15 kshlm joined #gluster
12:17 itisravi joined #gluster
12:18 xymox joined #gluster
12:25 xymox joined #gluster
12:31 glusterbot New news from newglusterbugs: [Bug 1086762] Add documentation for the Feature: BD Xlator - Block Device translator <https://bugzilla.redhat.com/show_bug.cgi?id=1086762>
12:34 jwww_ I run gluster 3.3  on two servers .I would like to add high avalaiblity( with something like heartbeat ) , do you guys have suggestions ?
12:36 xymox joined #gluster
12:37 sjm joined #gluster
12:42 primechuck joined #gluster
12:43 xymox joined #gluster
12:44 vpshastry joined #gluster
12:46 Ark joined #gluster
12:47 Pupeno_ joined #gluster
12:47 vpshastry left #gluster
12:48 flowouffff hurray i fixed my swift problems :)
12:48 flowouffff but i still got a weird behavior
12:48 flowouffff my data are stored as hash sub directories
12:49 hagarth joined #gluster
12:50 flowouffff i dont get why yet
12:51 flowouffff for ie: 38d7aabd10270d32efd9ee3dacae2342/1400589441.47885.data
12:51 flowouffff it's a dummy i've PUT via CURL
12:51 flowouffff +file
12:52 xymox joined #gluster
12:57 lalatenduM JustinClift, how I can be part of infra ML?
12:58 lalatenduM I dont see infra ml in http://www.gluster.org/interact/mailinglists/ hagarth hchiramm_ ^^
12:59 ctria joined #gluster
13:00 xymox joined #gluster
13:00 hagarth lalatenduM: http://www.gluster.org/mailman/listinfo/gluster-infra
13:00 glusterbot Title: Gluster-infra Info Page (at www.gluster.org)
13:00 lalatenduM hagarth, thanks :)
13:04 ppai flowouffff, that object naming convention is used by swift and not gluster-swift, check if the conf files in /etc/swift are the ones provided by gluster-swift and not swift
13:04 ppai flowouffff, Refer to: https://github.com/gluster/gluster-swift/blob/master/doc/markdown/quick_start_guide.md
13:04 glusterbot Title: gluster-swift/doc/markdown/quick_start_guide.md at master · gluster/gluster-swift · GitHub (at github.com)
13:04 vikhyat joined #gluster
13:06 japuzzo joined #gluster
13:07 sroy_ joined #gluster
13:11 xymox joined #gluster
13:17 dusmant joined #gluster
13:19 xymox joined #gluster
13:20 mjsmith2 joined #gluster
13:23 tdasilva joined #gluster
13:23 plarsen joined #gluster
13:25 jwww_ so guys nobody have high avalaibility file servers with gluster ?
13:26 xymox joined #gluster
13:26 ngong left #gluster
13:29 John_HPC joined #gluster
13:33 andreask joined #gluster
13:34 xymox joined #gluster
13:37 hchiramm_ joined #gluster
13:42 xymox joined #gluster
13:45 lpabon joined #gluster
13:46 cvdyoung joined #gluster
13:51 xymox joined #gluster
13:53 scuttle_ joined #gluster
13:54 kaptk2 joined #gluster
13:58 xymox joined #gluster
14:01 bennyturns joined #gluster
14:06 rahulcs joined #gluster
14:07 xymox joined #gluster
14:09 rahulcs_ joined #gluster
14:09 ron-slc joined #gluster
14:14 xymox joined #gluster
14:20 DV_ joined #gluster
14:22 wushudoin joined #gluster
14:23 xymox joined #gluster
14:31 xymox joined #gluster
14:39 xymox joined #gluster
14:42 hchiramm_ joined #gluster
14:44 kshlm joined #gluster
14:45 jbrooks joined #gluster
14:45 plarsen joined #gluster
14:46 xymox joined #gluster
14:50 coredump joined #gluster
14:50 lalatenduM joined #gluster
14:54 harish joined #gluster
14:54 xymox joined #gluster
15:03 xymox joined #gluster
15:04 Guest48705 joined #gluster
15:04 chirino joined #gluster
15:05 jag3773 joined #gluster
15:05 hchiramm__ joined #gluster
15:10 xymox joined #gluster
15:10 davinder joined #gluster
15:11 flowouffff thx for your reply ppai
15:11 flowouffff i've checked my conf files
15:11 flowouffff they are all using
15:12 flowouffff egg:gluster_swift#<module_name>
15:13 flowouffff i dont get why it's behaving that way
15:13 nikk__ left #gluster
15:14 nikk_ joined #gluster
15:14 nikk_ left #gluster
15:17 xymox joined #gluster
15:20 sprachgenerator joined #gluster
15:23 lalatenduM joined #gluster
15:23 shubhendu joined #gluster
15:27 xymox joined #gluster
15:28 jag3773 joined #gluster
15:28 bennyturns joined #gluster
15:29 lmickh joined #gluster
15:34 vpshastry joined #gluster
15:35 xymox joined #gluster
15:40 cvdyoung Hi, I am getting an error in my glustershd.log saying:
15:40 cvdyoung [2014-05-20 15:36:55.842703] I [rpc-clnt.c:1685:rpc_clnt_reconfig] 0-homegfs-client-0: changing port to 49167 (from 0)
15:40 cvdyoung [2014-05-20 15:36:55.845056] E [socket.c:2161:socket_connect_finish] 0-homegfs-client-0: connection to 10.200.70.1:49167 failed (Connection refused)
15:41 vpshastry left #gluster
15:42 xymox joined #gluster
15:46 koodough joined #gluster
15:51 xymox joined #gluster
15:53 coredumb joined #gluster
15:55 coredumb left #gluster
15:56 coredumb joined #gluster
15:56 jobewan joined #gluster
15:59 xymox joined #gluster
16:04 foster joined #gluster
16:04 chirino_m joined #gluster
16:07 lalatenduM joined #gluster
16:09 xymox joined #gluster
16:11 jbd1 joined #gluster
16:14 mattrixh joined #gluster
16:17 xymox joined #gluster
16:21 JoeJulian jwww_: Yes, I have a suggestion. Throw out that concept and just have highly available files by using a replicated volume.
16:21 lalatenduM_ joined #gluster
16:21 JoeJulian No heartbeat necessary.
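JoeJulian's point is that the native FUSE client talks to all replicas itself, so no heartbeat/VIP layer is needed. The one single point of failure is fetching the volume file at mount time, which can be covered with a backup volfile server (a sketch with hypothetical host names):

```shell
# The volfile server is only consulted at mount time; afterwards the
# client connects to every brick directly and survives a server failure.
mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/myvol
```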
16:22 JoeJulian cvdyoung: firewall
16:22 semiosis :O
16:22 JoeJulian o/
16:22 JoeJulian How's it goin'?
16:24 semiosis good.  vacation was long enough, glad to be back
16:24 xymox joined #gluster
16:25 Slashman joined #gluster
16:33 xymox joined #gluster
16:42 xymox joined #gluster
16:47 ramteid joined #gluster
16:49 xymox joined #gluster
16:50 FarbrorLeon joined #gluster
16:55 wushudoin joined #gluster
16:56 \malex\ joined #gluster
16:56 \malex\ joined #gluster
16:57 xymox joined #gluster
16:58 gmcwhist_ joined #gluster
16:59 baojg joined #gluster
17:02 bala joined #gluster
17:02 FarbrorLeon joined #gluster
17:04 FarbrorLeon joined #gluster
17:05 xymox joined #gluster
17:06 ctria joined #gluster
17:07 ben___ joined #gluster
17:08 ben___ nick/ benjaminrobert
17:08 chirino joined #gluster
17:12 xymox joined #gluster
17:14 cfeller joined #gluster
17:14 vpshastry joined #gluster
17:18 vpshastry1 joined #gluster
17:19 xymox joined #gluster
17:26 xymox joined #gluster
17:28 sroy_ joined #gluster
17:30 \malex\ joined #gluster
17:30 \malex\ joined #gluster
17:30 mattapperson joined #gluster
17:35 xymox joined #gluster
17:38 kanagaraj joined #gluster
17:41 ktosiek joined #gluster
17:43 xymox joined #gluster
17:44 vpshastry joined #gluster
17:45 cvdyoung Hey JoeJulian, I checked the FW and it's disabled.  It looks like one of the bricks keeps going offline.
17:45 JoeJulian Check the brick log.
17:47 Mo___ joined #gluster
17:50 cvdyoung aha!
17:50 cvdyoung [2014-05-19 12:46:11.409294] W [server-resolve.c:421:resolve_anonfd_simple] 0-server: inode for the gfid (1728a0f2-d5d8-4011-b6f6-bee7138c731e) is not found. anonymous fd creation failed
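When a brick keeps dropping offline (connection refused on its port, as in cvdyoung's glustershd log), a common triage sequence looks like this (a sketch; the volume name comes from the log above, the log path is the usual default):

```shell
# See which brick processes are online and which ports they listen on:
gluster volume status homegfs

# Inspect the brick log on the affected server for the crash/exit reason:
less /var/log/glusterfs/bricks/*.log

# Restart any dead brick processes without disturbing healthy ones:
gluster volume start homegfs force
```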
17:52 hagarth joined #gluster
17:52 lpabon joined #gluster
17:54 xymox joined #gluster
17:58 \malex\ joined #gluster
17:59 \malex\ joined #gluster
18:02 giannello joined #gluster
18:02 jag3773 joined #gluster
18:02 ben___ hi, i'm on 3.5 - when i vim a file on the client it retains the .swp file (across client and servers). Apart from this the set up looks good to go. Has anyone seen this issue before?
18:03 xymox joined #gluster
18:08 \malex\ joined #gluster
18:08 \malex\ joined #gluster
18:13 xymox joined #gluster
18:15 japuzzo joined #gluster
18:16 cvdyoung I am seeing lots of errors with "Stale file handle" in the brick log
18:16 cvdyoung [2014-05-20 18:15:50.799346] I [server-rpc-fops.c:154:server_lookup_cbk] 0-homegfs-server: 7842395: LOOKUP (null) (fef8447d-18dc-4c6f-9b29-6ab37c0306ad) ==> (Stale file handle)
18:18 in joined #gluster
18:19 \malex\ joined #gluster
18:25 xymox joined #gluster
18:27 ricky-ticky joined #gluster
18:32 Pupeno joined #gluster
18:35 xymox joined #gluster
18:38 baojg joined #gluster
18:42 xymox joined #gluster
18:51 jcsp joined #gluster
18:54 jag3773 joined #gluster
18:55 xymox joined #gluster
19:11 Philambdo joined #gluster
19:17 ricky-ticky joined #gluster
19:27 ricky-ticky joined #gluster
19:30 fyxim_ joined #gluster
19:32 rahulcs joined #gluster
19:36 zerick joined #gluster
19:55 baojg joined #gluster
19:56 ricky-ticky joined #gluster
20:03 glusterbot New news from newglusterbugs: [Bug 1099645] Unchecked strcpy and strcat in gf-history-changelog.c <https://bugzilla.redhat.com/show_bug.cgi?id=1099645>
20:03 edward1 joined #gluster
20:05 robert7811 joined #gluster
20:08 gdubreui joined #gluster
20:15 Pupeno joined #gluster
20:16 zaitcev joined #gluster
20:16 baojg joined #gluster
20:21 Pupeno joined #gluster
20:25 saltsa joined #gluster
20:26 rahulcs joined #gluster
20:26 edward2 joined #gluster
20:34 plarsen joined #gluster
20:39 andreask joined #gluster
20:43 jag3773 joined #gluster
20:45 rahulcs joined #gluster
20:58 badone joined #gluster
21:00 cyberbootje joined #gluster
21:06 sijis joined #gluster
21:06 sijis it looks like 1 file didn't get updated but when i run heal, its shows ok
21:06 sijis is there a way to force a sync?
21:07 sijis the timestamp on the files on each node is different
21:07 sijis md5sum is diff too
21:08 sijis trying 'volume heal VOLUMENAME' i see 'has been unsuccessful'
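For a single out-of-sync replica file like sijis describes, the usual next steps are (a sketch; VOLUMENAME is the placeholder from the message above):

```shell
# Request a full crawl instead of relying on the changelog-based heal:
gluster volume heal VOLUMENAME full

# Then check what is (or is not) queued for healing:
gluster volume heal VOLUMENAME info
gluster volume heal VOLUMENAME info split-brain
```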
21:11 cvdyoung Is there a limit on number of requests that gluster can take at any one time?  Like simultaneous reads and writes that I can have?
21:12 jmarley joined #gluster
21:12 jmarley joined #gluster
21:16 semiosis cvdyoung: there are limits on the number of open files in linux.  i've never heard of any artifical limits imposed by gluster though
21:16 semiosis cvdyoung: there's practical limits around network & disk io, available memory, etc
21:18 cvdyoung I have the ulimit set to 1024 for open files, but I always thought that if that limit was reached the client side would see the results
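Checking the limit semiosis mentions is straightforward; raising it persistently is distribution-specific (the limits.conf entry below is a hypothetical example, not a recommended value):

```shell
# Show the current per-process open-file limits, soft then hard:
ulimit -Sn
ulimit -Hn

# To raise the limit persistently for the gluster daemon's user, an
# entry like this in /etc/security/limits.conf is one option:
#   glusterfs  soft  nofile  65536
#   glusterfs  hard  nofile  65536
```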
21:20 sijis i'm seeing an i/o error on a single file
21:21 Ark joined #gluster
21:21 sijis look at the log, from the lcient, i see something like this "[2014-05-20 21:17:54.895377] W [page.c:991:__ioc_page_error] 0-gbp3-io-cache: page error for page = 0x7f66a4025ce0 & waitq = 0x7f66a401aaa0"
21:21 sijis heal on the volume shows 0
21:24 asku joined #gluster
21:25 ben___ 2 server replicas and the client are leaving .swp files even after closing vim editor on a file. What's happening here?
21:26 ben___ using 3.5
21:32 in Hey guys, I have been running a rebalance on my existing Gluster 3.4.2 setup since beginning of March...during the course of the rebalance I have noticed that the load is 100+ on one of the two new nodes added
21:32 in Today I did a rebalance status and noticed the node with the 100+ load has a "not started" state for the rebalance
21:32 in http://cl.ly/image/281r3t020C1Q
21:32 glusterbot Title: Image 2014-05-20 at 5.31.05 PM.png (at cl.ly)
21:32 n0de any ideas what it may be?
21:41 wushudoin joined #gluster
21:43 calum_ joined #gluster
21:50 lpabon joined #gluster
21:51 firemanxbr joined #gluster
21:57 lmickh joined #gluster
22:04 ctria joined #gluster
22:06 MugginsM joined #gluster
22:10 yinyin joined #gluster
22:14 mattappe_ joined #gluster
22:18 plarsen joined #gluster
22:29 Pupeno joined #gluster
22:39 coredump joined #gluster
23:07 cyberbootje joined #gluster
23:23 sjm joined #gluster
23:23 tdasilva left #gluster
23:30 mattappe_ joined #gluster
23:32 mattap___ joined #gluster
23:38 Ark joined #gluster
23:39 mattappe_ joined #gluster
23:57 mattapperson joined #gluster
23:59 mattappe_ joined #gluster
