IRC log for #gluster, 2013-10-25


All times shown according to UTC.

Time Nick Message
00:04 dneary joined #gluster
00:22 _pol joined #gluster
00:40 bayamo joined #gluster
00:56 _pol joined #gluster
00:56 bayamo joined #gluster
01:00 vpshastry joined #gluster
01:42 joshit_ joined #gluster
01:44 joshit_ i need some serious help
01:45 joshit_ running postfix, dovecot, spamassassin, amavis, 147 accounts on a gluster 3.3 mounted brick
01:46 joshit_ i'm getting a lot of Error open and pread errors
01:46 joshit_ can anyone help me?
01:54 sgowda joined #gluster
01:55 bala joined #gluster
01:57 druonysus joined #gluster
01:57 druonysus left #gluster
02:09 bharata-rao joined #gluster
02:13 kevein joined #gluster
02:16 ninkotech_ joined #gluster
02:16 ninkotech joined #gluster
02:20 jmeeuwen joined #gluster
02:22 bayamo left #gluster
02:33 purpleidea joshit_: i don't know who can help you, but i know that if you don't provide a lot more information, details and logs, then nobody can.
02:34 joshit_ http://pastebin.com/Yi1VfKXh
02:34 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
02:35 joshit_ http://paste.ubuntu.com/6298505/
02:35 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
02:35 joshit_ i assume maybe gluster, maybe ext4 permission levels?
02:35 joshit_ vmail dir runs completely off gluster mount
02:40 bulde joined #gluster
02:42 micu joined #gluster
02:49 joshit_ wheres julian when you need him :P
02:57 DV__ joined #gluster
02:58 bala joined #gluster
03:09 bala joined #gluster
03:09 bala joined #gluster
03:21 hagarth joined #gluster
03:24 shubhendu joined #gluster
03:25 RameshN joined #gluster
03:29 purpleidea joshit_: i looked at your paste, but you haven't provided a lot of information. you need to provide the corresponding gluster logs when this is happening...
03:44 Shdwdrgn joined #gluster
03:47 shylesh joined #gluster
03:49 itisravi joined #gluster
03:56 mrfsl joined #gluster
04:06 mrfsl joined #gluster
04:17 dusmant joined #gluster
04:21 joshit_ http://paste.ubuntu.com/6298810/
04:21 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
04:21 joshit_ logs also state input/output error
04:21 joshit_ i'm assuming gluster has corrupted all index files
04:23 joshit_ dovecot.index, dovecot.index.log and dovecot.uidlist all same error
04:29 ababu joined #gluster
04:30 mrfsl joined #gluster
04:38 _pol joined #gluster
04:40 _pol joined #gluster
04:46 ppai joined #gluster
04:50 aravindavk joined #gluster
04:53 mrfsl joined #gluster
04:56 _pol joined #gluster
04:57 kanagaraj joined #gluster
05:07 vpshastry1 joined #gluster
05:09 mrfsl left #gluster
05:14 CheRi joined #gluster
05:15 psharma joined #gluster
05:18 CheRi joined #gluster
05:18 ziiin joined #gluster
05:26 ndarshan joined #gluster
05:26 root joined #gluster
05:27 kshlm joined #gluster
05:27 rtm do i mount target from multiple storage nodes to configure a no-SPOF setup
05:31 rtm like for example mount -t glusterfs node1.mynet.com:/gvol1 node2.mynet.com:/gvol1 /mnt/gfs
05:33 shilpa_ joined #gluster
05:34 rtm exit
05:37 samppah @tell
05:37 glusterbot samppah: (tell <nick> <text>) -- Tells the <nick> whatever <text> is. Use nested commands to your benefit here.
05:38 samppah @later
05:38 glusterbot samppah: I do not know about 'later', but I do know about these similar topics: 'latest'
05:39 samppah @tell rtm you can use -o backupvolfile-server to specify another server to use for mounting. ie. mount -t glusterfs -o backupvolfile-server=node2.mynet.com node1.mynet.com:/gvol1 /mnt/gfs
05:39 glusterbot samppah: Error: I haven't seen rtm, I'll let you do the telling.
05:39 samppah okiedokie
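A minimal sketch of the failover mount samppah describes above, reusing the placeholder hostnames and volume from his example; the backupvolfile-server option just gives the native client a second server to fetch the volfile from if the first is unreachable:

    # mount from node1, falling back to node2 for the volfile
    mount -t glusterfs -o backupvolfile-server=node2.mynet.com node1.mynet.com:/gvol1 /mnt/gfs

    # roughly equivalent /etc/fstab entry (assumed layout, adjust to taste)
    node1.mynet.com:/gvol1  /mnt/gfs  glusterfs  defaults,_netdev,backupvolfile-server=node2.mynet.com  0 0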
05:40 samppah @help
05:40 glusterbot samppah: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin.
05:40 rjoseph joined #gluster
05:42 vshankar joined #gluster
05:51 lalatenduM joined #gluster
05:51 raghu joined #gluster
05:53 _pol joined #gluster
05:54 vmallika joined #gluster
06:02 kshlm joined #gluster
06:04 bala joined #gluster
06:05 bala joined #gluster
06:06 satheesh1 joined #gluster
06:07 nshaikh joined #gluster
06:08 anands joined #gluster
06:10 _pol joined #gluster
06:17 rastar joined #gluster
06:24 aravindavk joined #gluster
06:24 satheesh joined #gluster
06:25 ndarshan joined #gluster
06:25 ricky-ticky joined #gluster
06:25 RameshN joined #gluster
06:25 kanagaraj joined #gluster
06:26 rastar joined #gluster
06:26 shubhendu joined #gluster
06:35 dusmant joined #gluster
06:49 ngoswami joined #gluster
06:49 mohankumar joined #gluster
06:58 bulde joined #gluster
07:00 eseyman joined #gluster
07:05 ctria joined #gluster
07:16 mgebbe joined #gluster
07:23 harish_ joined #gluster
07:25 hngkr_ joined #gluster
07:36 khushildep joined #gluster
07:36 calum_ joined #gluster
07:40 hybrid5121 joined #gluster
07:46 bala joined #gluster
07:50 kanagaraj joined #gluster
07:50 andreask joined #gluster
07:50 ndarshan joined #gluster
07:50 aravindavk joined #gluster
07:51 shubhendu joined #gluster
07:51 dusmant joined #gluster
07:58 pkoro joined #gluster
08:04 jbrooks joined #gluster
08:04 keytab joined #gluster
08:14 glusterbot New news from newglusterbugs: [Bug 1023309] geo-replication command failed <http://goo.gl/JKsevM>
08:30 DV joined #gluster
08:43 ndarshan joined #gluster
08:44 sgowda joined #gluster
08:48 Technicool joined #gluster
08:52 shubhendu joined #gluster
08:52 kanagaraj joined #gluster
08:53 dusmant joined #gluster
08:55 bala joined #gluster
08:56 ngoswami joined #gluster
08:59 vimal joined #gluster
08:59 mohankumar joined #gluster
09:05 manik joined #gluster
09:09 msciciel joined #gluster
09:12 vmallika joined #gluster
09:12 dneary joined #gluster
09:14 tryggvil joined #gluster
09:14 glusterbot New news from newglusterbugs: [Bug 963335] glusterd enters D state after replace-brick abort operation <http://goo.gl/pGy04>
09:15 sgowda joined #gluster
09:18 psharma joined #gluster
09:22 aravindavk joined #gluster
09:22 bulde joined #gluster
09:27 bala joined #gluster
09:37 calum_ joined #gluster
09:41 shubhendu joined #gluster
09:50 sgowda joined #gluster
09:53 social I'm getting peer probe: XY failed is already at a higher op-version, when they should have the same version
09:53 social how do I find out what is running on older version? I guess some client which blocks it
09:57 aravindavk joined #gluster
10:10 bulde joined #gluster
10:10 hybrid5121 joined #gluster
10:16 andreask joined #gluster
10:22 kkeithley1 joined #gluster
10:27 aravindavk joined #gluster
10:33 _pol_ joined #gluster
10:33 ngoswami joined #gluster
10:44 social I'll answer myself: just change it in /var/lib/glusterd/glusterd.info and restart glusterd
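A rough sketch of the workaround social describes, assuming a 3.4-era install where /var/lib/glusterd/glusterd.info carries an operating-version line and glusterd runs as a sysvinit service:

    # see which op-version each node's glusterd is pinned to
    grep operating-version /var/lib/glusterd/glusterd.info

    # on the node with the stale value: edit the operating-version line, then restart
    vi /var/lib/glusterd/glusterd.info
    service glusterd restart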
10:45 ProT-0-TypE joined #gluster
10:47 RameshN joined #gluster
10:48 dneary joined #gluster
10:52 psharma joined #gluster
10:55 ProT-0-TypE joined #gluster
11:00 vpshastry1 joined #gluster
11:11 ricky-ticky joined #gluster
11:16 edward1 joined #gluster
11:25 tryggvil joined #gluster
11:47 manik joined #gluster
11:51 manik left #gluster
11:52 manik joined #gluster
12:03 rcheleguini joined #gluster
12:04 bulde1 joined #gluster
12:05 harish_ joined #gluster
12:30 ndarshan joined #gluster
12:32 aravindavk joined #gluster
12:33 jbrooks joined #gluster
12:34 B21956 joined #gluster
12:37 kPb_in_ joined #gluster
12:41 DV joined #gluster
12:45 shubhendu joined #gluster
12:49 RameshN joined #gluster
12:50 Technicool joined #gluster
12:53 ndarshan joined #gluster
12:59 bennyturns joined #gluster
13:01 dneary joined #gluster
13:03 calum_ joined #gluster
13:09 ctrianta joined #gluster
13:21 ndk joined #gluster
13:24 ndarshan joined #gluster
13:26 tryggvil joined #gluster
13:32 lpabon joined #gluster
13:36 rwheeler joined #gluster
13:45 klaxa|work joined #gluster
13:46 shilpa_ joined #gluster
13:52 kaptk2 joined #gluster
13:56 failshell joined #gluster
13:58 klaxa|work where do i increase the loglevel of the self-heal daemon in glusterfs 3.3.2?
14:09 plarsen joined #gluster
14:13 ndarshan joined #gluster
14:14 ctrianta joined #gluster
14:21 wushudoin joined #gluster
14:33 tryggvil joined #gluster
14:46 daMaestro joined #gluster
14:47 lalatenduM joined #gluster
14:47 bugs_ joined #gluster
14:58 eseyman joined #gluster
14:59 soukihei joined #gluster
15:00 soukihei left #gluster
15:00 Guest54292 joined #gluster
15:13 hateya joined #gluster
15:20 vpshastry joined #gluster
15:24 klaxa|work1 joined #gluster
15:36 aravindavk joined #gluster
15:42 LoudNoises joined #gluster
15:42 ira joined #gluster
16:07 andreask joined #gluster
16:08 vpshastry joined #gluster
16:11 emitor joined #gluster
16:16 rotbeard joined #gluster
16:21 emitor Hi, I had a gluster volume replica 2 configured, and I had to reinstall one of the hosts that was part of the replica. How can I attach the reinstalled host to the volume again?
16:23 tryggvil joined #gluster
16:25 hateya joined #gluster
16:26 tryggvil joined #gluster
16:29 Mo_ joined #gluster
16:33 zerick joined #gluster
16:37 bulde joined #gluster
16:45 ninkotech joined #gluster
16:46 tryggvil joined #gluster
16:47 phox joined #gluster
17:00 zaitcev joined #gluster
17:01 saurabh joined #gluster
17:02 tryggvil joined #gluster
17:06 ndarshan joined #gluster
17:20 ninkotech joined #gluster
17:24 ninkotech_ joined #gluster
17:35 lpabon joined #gluster
17:41 tryggvil joined #gluster
17:51 _pol joined #gluster
17:52 _pol_ joined #gluster
17:54 tryggvil joined #gluster
17:54 failshell joined #gluster
17:59 ninkotech joined #gluster
18:04 _BryanHm_ joined #gluster
18:05 tryggvil joined #gluster
18:08 sjoeboo joined #gluster
18:13 ninkotech joined #gluster
18:21 ninkotech_ joined #gluster
18:24 ninkotech joined #gluster
18:39 JoeJulian ~replace server | emitor
18:39 glusterbot JoeJulian: Error: No factoid matches that key.
18:39 JoeJulian @replace
18:39 glusterbot JoeJulian: (replace [<channel>] <number> <topic>) -- Replaces topic <number> with <topic>.
18:39 JoeJulian @factoid replace
18:39 * JoeJulian grumbles
18:39 JoeJulian @factoids search replace
18:39 glusterbot JoeJulian: 'clean up replace-brick' and 'replace'
18:39 JoeJulian @whatis replace
18:39 glusterbot JoeJulian: Useful links for replacing a failed server... if replacement server has different hostname: http://goo.gl/nIS6z ... or if replacement server has same
18:39 glusterbot JoeJulian: hostname: http://goo.gl/rem8L
18:39 JoeJulian emitor: ^^
18:40 bennyturns joined #gluster
18:41 JoeJulian klaxa: the self-heal daemon should use the same log level as a client, so volume set $vol diagnostics.client-log-level $LEVEL
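A minimal sketch of that setting, assuming a hypothetical volume named myvol; since the self-heal daemon follows the client log level, raising it also makes glustershd chattier:

    # raise the client (and therefore self-heal daemon) log level
    gluster volume set myvol diagnostics.client-log-level DEBUG

    # watch the self-heal daemon's own log
    tail -f /var/log/glusterfs/glustershd.log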
18:48 Morpheous joined #gluster
18:48 Morpheous ellos
18:48 Morpheous :)
18:48 Morpheous Hello guys
18:48 Morpheous ladies
18:48 Morpheous :)
18:49 Morpheous do I just ask a question?
18:49 Morpheous :)
18:50 Morpheous doing a small cloud and would like to use glusterfs for our primary storage. Anyone has had some issues?
18:52 vimal joined #gluster
18:53 JoeJulian Assuming the logical "anyone" meaning any in the set of all individuals, yes.
18:53 * JoeJulian and yes, I do take everything literally. ;)
18:53 Morpheous jajajajaja JoeJulian
18:53 Morpheous :D
18:54 Morpheous simple i am launching 2 servers as host to xenserver
18:54 Morpheous virtualize the gluster and make the local storage as nfs
18:55 Morpheous sorry to sound so naive
18:56 JoeJulian Don't worry about naivety, everyone starts there. My issue is that I'm not sure what you're asking.
18:57 Morpheous so
18:57 Morpheous baremetal or vm
18:58 JoeJulian My preference is bare metal, but others have had success with vm.
18:58 Morpheous then VM it is
18:58 Morpheous :D
18:59 JoeJulian excellent
18:59 Morpheous my thought is, i can use amazon vm as a temporary scaleout solution if i need space quickly
19:00 Morpheous joe is 7200rpm really ok?
19:01 JoeJulian Sure. It depends on your needs, of course.
19:01 JoeJulian We have users wholly on amazon.
19:01 Morpheous awesome!
19:02 Morpheous i read somewhere about putting vm/mysql on it is not supported
19:03 JoeJulian I host mysql on glusterfs, and putting vm images on it is quite common.
19:03 Morpheous :)
19:03 Morpheous thanks!
19:04 JoeJulian In fact, some preliminary test results showed putting mysql using innodb that's partitioned across distribute subvolumes in gluster actually performed better than hosting it all on a "local" ebs volume.
19:05 Morpheous thats cool
19:05 Morpheous is interconnection through infiniband required?
19:06 JoeJulian no
19:06 Morpheous i have 2- 1 giga lines to storage cabinet
19:06 Morpheous my servers have to be in different floors due to power requirements
19:07 Morpheous 16blades for processing interconnected by 2 1giga lines
19:08 dbruhn joined #gluster
19:08 Morpheous sorry, measure twice cut once. I am going to launch glusterfs this weekend, but needed some actual info
19:08 JoeJulian Just use ,,(Joe's performance metric)
19:08 glusterbot nobody complains.
19:08 Morpheous joes performance metric?
19:09 JoeJulian (the one that glusterbot just quoted)
19:10 Morpheous ,,(Joe's performance metric)
19:10 glusterbot nobody complains.
19:10 Morpheous (Joe's performance metric)
19:11 Morpheous how do i access bots info?
19:11 Morpheous :
19:11 JoeJulian /msg glusterbot factoids search #gluster *
19:14 dbruhn ugh I have a client mount that won't mount and I can't seem to find anything in any logs
19:15 JoeJulian "Have you tried turning it off and on again."
19:16 JoeJulian fpaste the log
19:16 dbruhn indeed, three times even
19:16 dbruhn which log in particular?
19:16 JoeJulian client
19:16 dbruhn is that the etc-glusterfs log?
19:16 JoeJulian /var/log/glusterfs/{mountpoint with / replaced with -}.log
19:17 diegows_ joined #gluster
19:17 emitor JoeJulian: thanks, I've solved it by copying the UUID from the host that wasn't reinstalled to the one I've reinstalled
19:18 JoeJulian I assume you mean from the peers file... :) cool.
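A rough sketch of what emitor did, assuming the reinstalled server should take back the UUID the surviving peer still has on file; hostnames here are placeholders:

    # on the surviving server: each file under peers/ is named after a peer's UUID
    grep -l reinstalled-host /var/lib/glusterd/peers/*

    # on the reinstalled server: set the UUID= line in glusterd.info to that value, restart, re-probe
    vi /var/lib/glusterd/glusterd.info
    service glusterd restart
    gluster peer probe surviving-host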
19:18 dbruhn Joe is there a way to use tail with the fpaste command?
19:19 JoeJulian tail $file | fpaste
19:19 dbruhn weird didn't work the first time I tried it
19:19 dbruhn http://paste.fedoraproject.org/49531/82728763
19:19 glusterbot Title: #49531 Fedora Project Pastebin (at paste.fedoraproject.org)
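A small sketch of the log-gathering steps above, assuming a volume mounted at the hypothetical path /mnt/gfs and the fpaste utility installed:

    # the native client logs to a file named after the mount point, with / turned into -
    tail -n 100 /var/log/glusterfs/mnt-gfs.log

    # pipe it straight to a pastebin, as suggested
    tail -n 100 /var/log/glusterfs/mnt-gfs.log | fpaste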
19:20 vpshastry left #gluster
19:20 JoeJulian dbruhn: That's not a mount, that's an existing mount that cannot connect to the first brick on the .9 server.
19:20 emitor JoeJulian: yes
19:21 calum_ joined #gluster
19:22 dbruhn is there a command to see what bricks are up? I know the "gluster peer status" command
19:22 * JoeJulian is too lazy. He even fleetingly considered kicking the other user whose nick starts with "db" so "db[tab]" would give him the nick that he wanted...
19:23 JoeJulian dbruhn: gluster volume status
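For reference, a quick sketch of the two commands, with a placeholder volume name:

    gluster peer status                  # lists peers and their connection state, not bricks
    gluster volume status myvol          # shows whether each brick process is online, plus ports and PIDs
    gluster volume status myvol detail   # adds per-brick disk and inode usage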
19:29 dbruhn lol
19:29 dbruhn I am having a day like that too
19:36 ninkotech_ joined #gluster
19:37 Morpheous lol joe, bot is not returning or giving any info
19:37 tryggvil joined #gluster
19:37 Morpheous :|
19:37 ninkotech joined #gluster
19:38 dbruhn Morpheous, the bot it telling you "nobody complains"
19:38 Morpheous lol dbruhn
19:38 Morpheous so thats his answer?
19:38 Morpheous xD
19:38 dbruhn hahah yep
19:38 dbruhn you want to know about performance?
19:38 dbruhn what are you trying to figure out?
19:39 Morpheous im trying to plan ahead
19:39 dbruhn there is no magic to performance with gluster, it's completely a product of what you put under the software
19:39 dbruhn so, if you need fast, you need fast low latency networking such as infiniband or 10gb
19:39 Morpheous i know im going to get "oh crap should have done it like this" but im trying to do best planning at the datacenter and local install
19:40 dbruhn and you need spindles or ssd to match
19:40 Morpheous for what applications would ssd and 10gb be considered?
19:41 Morpheous thanks dbruhn makes sense
19:43 failshell joined #gluster
19:56 dbruhn Morpheous, sorry stepped away for a sec
19:56 Morpheous no worries, happens
19:57 dbruhn I am personally using 40gb Infiniband, and 15k SAS on my faster system, but I need to produce about 700 random read IOPS for every 1TB of data I store on that system
19:58 dbruhn what I am really getting at is, what are you using the system for? and what kind of performance are you expecting
19:59 dbruhn JoeJulian, are all of the files in the /var/lib/glusterd/vols/(volume name) the same between all the peers?
19:59 Morpheous im trying to build a private cloud to host a social media app
20:00 JoeJulian dbruhn: They should be.
20:00 dbruhn so what are you storing on the storage?
20:00 Morpheous pictures, video, audio
20:00 Morpheous database
20:00 Morpheous and i am barely starting
20:01 Morpheous and i am barely starting with the infrastructure so there are so many options
20:01 JoeJulian I would break some of those into multiple volumes.
20:01 dbruhn ok, so you are talking about building storage silo's for your backend
20:01 dbruhn yeah for sure
20:01 dbruhn you need to build your performance around the needs of the media types
20:01 dbruhn also, gluster is not the right thing for running your database from
20:02 Morpheous pictures one volume, audio one volume, video one volume
20:02 Morpheous dbruhn i was asking that earlier
20:02 Morpheous dbruhn i was asking that earlier about db
20:02 JoeJulian dbruhn: It actually might be... Some of my early tests got better performance from dht sharded innodb.
20:03 dbruhn Really? good to know
20:03 dbruhn Morpheous, how big do you expect your database to get?
20:04 Morpheous its a global social app
20:04 Morpheous mobiles, webapp, etc
20:04 Morpheous :( issue is, we wont do amazon because we want to control growth
20:05 Morpheous so we are laying the private cloud first
20:05 dbruhn I use apache cloudstack quite a bit
20:05 Morpheous thats what i will be using dbruhn
20:05 Morpheous i want to make gluster our primary storage for cloudstack
20:05 JoeJulian I'm still trying to get Colin Charles to stay still long enough to help me further prove the database performance.
20:06 dbruhn Morpheous, gluster 3.4.1 now has QEMU support, JohnMark is putting together a list of people trying to do what you are talking about doing
20:06 dbruhn you can obviously already use it as a shared mount point in cloudstack today
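A hedged sketch of the two approaches dbruhn mentions, assuming qemu built with the GlusterFS block driver and a hypothetical volume vmvol served from server1:

    # 3.4.x libgfapi path: create a disk image directly on the gluster volume
    qemu-img create -f qcow2 gluster://server1/vmvol/disk0.qcow2 20G

    # or keep using a plain FUSE mount as shared primary storage
    mount -t glusterfs server1:/vmvol /mnt/primary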
20:07 Morpheous awesome
20:07 Morpheous im so freaking nervouse about this
20:07 Morpheous :D
20:07 dbruhn my biggest gripe is that 3.4.1 breaks RDMA, and my personal opinion is that RDMA is key to the performance characteristics needed for disk image storage
20:07 Morpheous dbruhn im planning on xenserver and vm to glusterfs as virtual machines
20:08 JoeJulian Morpheous: Are you using decent configuration management? ,,(puppet-gluster) might be a godsend if your app really takes off and you need to expand quickly.
20:08 glusterbot JoeJulian: Error: No factoid matches that key.
20:08 JoeJulian @puppet
20:08 glusterbot JoeJulian: (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
20:09 Morpheous i will be now
20:09 Morpheous :D
20:09 Morpheous guys im a novice in the lowest definition of gluster
20:09 dbruhn Morpheous, interesting take on it. I am actually trying to build gluster as part of my virtual machine clusters in cloudstack
20:09 dbruhn using kvm
20:09 Morpheous awesome! dbruhn
20:10 dbruhn i am still using 3.3.2 though, so qemu support isn't there yet
20:10 purpleidea Morpheous: hey
20:10 Morpheous im starting so i possibly go with the bleeding edge
20:10 dbruhn gluster is easy to set up; there are some ins and outs to working with it when it's misbehaving, but it's nice.
20:10 Morpheous hey purpleidea
20:10 dbruhn cloudstack is far more complex than gluster
20:10 dbruhn ceph is another good option for cloudstack
20:11 dbruhn not that I should steer you away
20:11 failshell joined #gluster
20:11 Morpheous hahahaha
20:11 Morpheous no worries, im in ultra research mode
20:12 Morpheous me and my big mouth, thought of just using amazon, but because we dont know how we will grow, we have to control flow
20:12 Morpheous dbruhn how do you like kvm vs xenserver?
20:13 dbruhn KVM is easy and well supported, and so is Xen; both stay about comparable. I liked KVM for the fact that I could run larger clusters with it, and it seemed to be a bit more flexible
20:13 dbruhn Xen just went free, so I am sure adoption will be nice
20:14 dbruhn if you are going down the path of cloudstack and xen, you can get great support from citrix if you want to pay for it
20:14 AndreyGrebenniko joined #gluster
20:14 dbruhn they will also support kvm on it as well though
20:14 Morpheous thats good to know because we have about 300 vms on xen now
20:16 purpleidea Morpheous: sorry, afk for a moment... there's some new magic elastic puppet-gluster stuff i should be landing soon
20:16 Morpheous awesome
20:17 purpleidea JoeJulian: btw, thanks for calling puppet-gluster a godsend. i'm officially flattered.
20:17 Morpheous :)
20:17 purpleidea Morpheous: did you see: https://github.com/purpleidea/puppet-gluster/blob/master/examples/gluster-simple-example.pp ?
20:17 glusterbot <http://goo.gl/5mxOIJ> (at github.com)
20:19 andreask joined #gluster
20:30 compbio_ /lastlog seagate
20:30 purpleidea Morpheous: well, anyways, that's a simple way to get a test cluster going. In the future you should be able to just use gluster::elastic, and it will automatically grow/shrink your cluster. ;) more info to come.
20:31 Morpheous awesome purpleidea
20:43 phox hey so.... is there a way to FORCE gluster to regenerate all of its metadata files in /.glusterfs so it stops intermittently telling me files don't exist when they obviously do?
20:43 phox does selfheal or whatever accomplish that on a single-brick volume?
20:44 DocGreen joined #gluster
20:44 DocGreen hi everyone
20:49 Morpheous hello DocGreen
20:51 DocGreen i have a short question on replicated setups. is it right that gluster syncs changes which happen directly to the disk? say i have a brick which uses /mnt/brick1 as a data folder for the volume and i mount the gluster volume to /mnt/glustermount. in version 3.2 direct changes on harddisk level in /mnt/brick1 did not get synced. of course changes in /mnt/glustermount were synced across all bricks.
20:52 DocGreen i just noticed that changes to /mnt/brick1 are being synced to all other nodes/bricks
20:53 emitor no it doesn't
20:53 DocGreen mmmh
20:54 emitor the changes must be made from the fuse interface to be synchronized
20:54 DocGreen so why does it sync then
20:54 DocGreen i am running 3.3 now by the way
20:54 dbruhn the synchronization happens on the fuse clients write
20:55 dbruhn the self heal will correct a missing file on a replication pair, but it would probably need its info in the .glusterfs directory to be sync'ed
20:55 emitor if you write directly on the disk it doesn't sync, you must write through the fuse client, or NFS, or CIFS
20:55 dbruhn which wouldn't be created if it didn't go through the mount point for the volume
20:56 dbruhn you shouldn't be modifying the data on the bricks directly either, what are you trying to accomplish?
20:56 DocGreen yeah thats what i thought, but why does it sync now
20:57 DocGreen this is a home setup, i am replicating data from a mini server, to a desktop machine, it is basically a backup solution in a way
20:58 DocGreen and i used to write things directly to the disks if i needed higher throughput
20:59 elyograg DocGreen: backup through replication doesn't protect against human or computer error.  that is to say, if you or a program deletes a file, the deletion gets replicated too.
21:02 DocGreen i know, it is not an intended use for gluster. and it is not for critical data, for that i have other backup solutions. i made this setup because it fits perfectly between data safety, costs and an always running server
21:07 DocGreen the main idea was to have a tiny server running with as little power consumption as possible and to have a "backup" of the data for harddisk failure on the little server. the desktop is more powerful and of course uses more power. whenever the desktop is turned on gluster takes care of copying new files etc... and does this perfectly, but for some reason gluster has now started syncing files which were created directly on the disk
21:09 DocGreen and the changes definitely did not go through the fuse-client... that's why i am confused
21:10 JoeJulian When you use a system in a way that is not supported, you have by definition unpredictable behavior.
21:10 DocGreen yeah right, so this is not a feature of 3.3 somehow?
21:11 JoeJulian nope
21:11 DocGreen strange
21:12 DocGreen i would have liked that feature for my purposes
21:14 JoeJulian There are a number of reasons not to design for that behavior.
21:16 JoeJulian It'd be like editing a filesystem image from dom0 for a running VM.
21:16 DocGreen yes and i am aware of that, that is why i am confused, i couldnt think of that as a feature which is turned on by default
21:18 JoeJulian I could expand at length as to why you're seeing that behavior, but since that's not something that I (or anyone else in here) are willing to support, I feel like it would kind-of be a waste of time.
21:19 JoeJulian I hope you understand my apprehension.
21:19 nueces joined #gluster
21:22 DocGreen yes, don't worry. but to sum up: it is not totally erratic behaviour of gluster. to be on the safe side i should move the data folder to a new folder on the disk, so i can still access the disk directly in a separate folder without accidentally compromising data integrity or any other strange behaviour of gluster
21:23 dbruhn http://ur1.ca/fxucd
21:23 glusterbot Title: #49567 Fedora Project Pastebin (at ur1.ca)
21:23 dbruhn Joe, I am still fighting with this brick, I went in and replaced the volume files with a known good config and am getting this now
21:24 JoeJulian dbruhn: Man... you are not having good luck this week...
21:24 dbruhn You could say that
21:24 JoeJulian Try "glusterd --debug"
21:28 dbruhn http://fpaste.org/49569/82736532/
21:28 glusterbot Title: #49569 Fedora Project Pastebin (at fpaste.org)
21:30 JoeJulian dbruhn: So it's in the peer processing. Look in the etc-glusterfs-glusterd.vol.log on ENTSNV04005EP
21:31 JoeJulian My guess is (based on your earlier problems) that ENTSNV04005EP has the wrong uuid for this host.
21:31 dbruhn checking
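A rough sketch of the check being suggested, using default paths; the point is to confirm that the UUID this host advertises matches what ENTSNV04005EP has recorded for it:

    # on the failing host: the UUID glusterd believes is its own
    cat /var/lib/glusterd/glusterd.info

    # on ENTSNV04005EP: files under peers/ are named by peer UUID and carry the hostname
    ls /var/lib/glusterd/peers/
    grep -H hostname1 /var/lib/glusterd/peers/*

    # run glusterd in the foreground with debug output to watch the handshake fail
    glusterd --debug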
21:33 calum_ joined #gluster
21:35 calum_ I'm sure I have asked this before but I'm not sure if I got an answer. What is the recommended way of running gluster... as virtual machines or on bare metal? Also, if the VM option is used, should virtual disks be considered or... well, I'm not sure really... help please?
21:35 JoeJulian you won't like my answer... ;)
21:35 JoeJulian Depends on your use case.
21:36 hflai joined #gluster
21:39 Morpheous left #gluster
21:41 tryggvil joined #gluster
21:49 calum_ joined #gluster
21:49 calum_ arrgh internet crashed. did anyone answer my question?
21:56 JoeJulian https://botbot.me/freenode/gluster/
21:56 glusterbot Title: Logs for #gluster | BotBot.me [o__o] (at botbot.me)
21:56 JoeJulian But yes... I did, and I mentioned that you wouldn't like my answer...
21:56 JoeJulian My answer: Depends on your use case.
22:01 calum_ JoeJulian: ok.... I need scalable storage for a client that does property surveys. They currently use about 3 GB a day and this is stored on a Windows SBS server. File access is slow as lots of people access the server at once and disk activity is really high. I am hoping gluster will solve the problems of scalability (they are looking to increase 10-fold in size shortly) and access speed whilst also having failover capability.
22:01 calum_ In this use case what would the advice be?
22:03 phox the internet crashed?
22:04 calum_ phox: ok... maybe abit over the top. the correct statement would be my connection went down
22:04 phox :)
22:04 phox :D
22:07 ctrianta joined #gluster
22:10 JoeJulian bbiab... someone just crashed their car into the DSLAM outside one of our locations... The car caught fire so I don't expect us to get service back there very soon. I need to make other arrangements.
22:10 Remco o.O
22:16 calum_ JoeJulian: oops
22:31 tryggvil joined #gluster
23:16 _pol joined #gluster
23:29 DV joined #gluster
23:34 dneary joined #gluster
23:56 _pol joined #gluster
