Closed Bug 256890 Opened 20 years ago Closed 17 years ago

Provide a way to tell calendar to reload a calendar file

Categories: Calendar :: General (defect)
Platform: x86
OS: All
Type: defect
Priority: Not set
Severity: normal
Tracking: Not tracked

RESOLVED DUPLICATE of bug 329570

People: Reporter: mostafah; Assignee: Unassigned

Attachments: 2 files

In some situations it is desirable to instruct the calendar to reload a specific
file; examples are manual modification of the ics files and modifications
through automated scripts.
This bug is to implement this feature.
One way to do this is to expand the current functionality of checkCalendarURL in
calendarManager.js to deal with filenames passed to the calendar application as
an argument. The idea is to have the calendar refresh a file when the path to
that file is passed as an argument.
This patch expands the calendarManager.checkCalendarURL function to consider
local file paths in a URI format.
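For illustration, here is a minimal sketch (plain JavaScript, not the actual patch) of the kind of normalization needed to compare a command-line file path against already-subscribed calendar URLs; the helper name `pathToFileUri` is hypothetical.

```javascript
// Sketch (not the actual patch): normalize a local file path passed on the
// command line into a file:// URI, so the same lookup logic can match it
// against already-subscribed calendar URLs. The helper name is hypothetical.
function pathToFileUri(path) {
  // Convert Windows back-slashes and prefix drive letters with a slash.
  let p = path.replace(/\\/g, "/");
  if (/^[A-Za-z]:\//.test(p)) {
    p = "/" + p;
  }
  // Percent-encode characters that are not allowed raw in a URI path.
  return "file://" + encodeURI(p);
}
```

With this, `"/home/user/cal.ics"` and `"C:\cal\work.ics"` both normalize to `file://` URIs that can be compared textually against a calendar's registered URL.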
Attachment #157201 - Attachment description: Patch to add the ability to accept local file paths as arguments to calendar → Patch to add the ability to accept local file paths as arguments to calendar(checked in)
With this change:
http://bonsai.mozilla.org/cvsquery.cgi?treeid=default&module=all&branch=HEAD&branchtype=match&dir=mozilla%2Fcalendar%2Fresources%2Fcontent&file=calendarManager.js&filetype=match&who=&whotype=match&sortby=Date&date=hours&hours=2&mindate=&maxdate=&cvsroot=%2Fcvsroot

filepath arguments passed to calendarManager will either be subscribed to as a
new calendar or parsed and displayed in an addEvent dialog.
If the file is already subscribed to, it should reload that calendar.
A commonly requested feature is to reload the calendar if the timestamp changes
(probably due to modification from outside). This might deserve its own bug, but
there's no harm in mentioning it here.
IMHO the calendar should be reloaded:

(1) When the mtime has changed

(2) Just before the "edit event" dialog is shown

(3) After edits are confirmed (after the "ok" button on "edit event" dialog is
pressed) but before changes are saved to disk

In addition to the points outlined in the previous comments, (1)+(2)+(3) would
make it possible to create a relatively safe and very simple calendar "server"
on small LANs (a VERY common case) while minimizing problems with concurrent
edits and locking....

(1) is obvious: open the .ics file in r/o mode and reload.

(2) would avoid missing previous changes to the event being edited... again,
open in r/o mode...

Comparing (2) and (3) you might detect whether the same event has been edited by
someone else since the "edit event" dialog was opened.

(3) In this case, and only in this case, open the ics file in r/w mode (thus,
hopefully, forcing a lock). (3) will "reduce" the window of opportunity for file
locking problems to the time between the "ok" button being pressed and the time
the ics file is actually saved. Furthermore, by using the ics file opened in (3),
replacing only the text of the edited event, and then saving it to disk, one
will avoid overwriting changes to other events made by other users since (2).

This makes an effective poor man's calendar "server"... Very simple to
configure... Just point each client to the same .ics file sitting on a shared
folder; each calendar client will use it exactly as if it were a local file... No
need for separate local/remote files with syncing/uploading... No additional
options... No additional calendar servers... No webdav...

Arguably not a bullet-proof solution, but good enough for small networks...
Locking is performed at file level by the file system. The file is open in r/o
mode most of the time, and in r/w mode only for very short periods (after ok is
pressed, up to the time the file is saved on disk). Some simple additional
locking mechanism (e.g. using a file as a flag, but ignoring+killing the flag if
older than X seconds) might improve (not solve) potential locking issues.

Obviously this would not replace the use of remote calendars on webdav/ftp
folders, nor will it change the way local non-shared calendars are used (there
will be more refreshes in the background but the end user would not notice)...
It will only provide a simple alternative for people with a small network
looking for a simple poor man's calendar server... ...even if not a bullet-proof
one...

Since this bug is about "reloads", and since (1) has already been mentioned...
please consider also (2)+(3)... And the little bits of additional logic... It is
not much more work but it could make a big difference to many people...
Just to clarify... Once there is a "refreshcalendar()" method it is easy to go
from there to a poor man's calendar _server_

You only need to call the function before showing the "edit event" dialog and
before merging and saving the new edits...

And obviously at regular intervals....

That is (almost) it.... And you have a functional calendar server....

There are some minor implementation issues...

A) The "refreshcalendar()" method should check whether the ics file mtime is
newer than the latest stored mtime for that calendar. If not, it should just
exit.

B) When refreshing before/after edits (2, 3 above) it might happen that the event
being edited is no longer present in the ics file (because another user deleted
it since the last reload)... Check for this case and take appropriate action.

C) For (2) it is not really necessary to refresh the calendar gui... In fact it
would be sufficient to parse only the individual event being edited... just in
case it has been modified since the last reload (1)... (2) might also be skipped
altogether to save bandwidth...

D) You need to check whether the edited event has been modified externally between
the last reload (1 or 2) and the "ok" press (3). Which means storing the event
as a string before the user edits it, and comparing it to the same event (if there
is still one) in the ics file loaded before the save (3).

E) Some more coding for an additional locking mechanism??? This is completely
optional and could be done later on....

F) The same procedure (1 + 2??? + 3) should also be used for remote calendars on
webdav/ftp, since they will also be used as poor man's calendar servers and the
functionality should be similar... in fact it is probably very similar to what
is already used for remote calendars once you select "publish changes
automatically"... The "refreshcalendar()" method should be the same for
local/remote calendars... On the same topic: there is little sense in having
"Refresh Remote Calendars" in the context menu... Why not just "Refresh Calendars"?



A calendar like that will scale very badly. Saving a calendar is already slow,
and I think loading is too. Doing it a couple of times for every edit will grind
calendar to a halt when there are lots of events involved (on the order of a
couple of hundred).
That is what you already do for calendars on FTP and Webdav... If it is good
enough over the internet, why can't you do something similar over a LAN????

As mentioned, you really only need to wait for ONE download/read operation (3),
and ONLY IF the ics file has changed since the last download/read... (1) is done
in the background once in a while... and only if mtime>mtime_old... (2) is
optional and you can speed it up by only extracting the text for the edited event
with a simple text parsing function that returns the event as a string (that is
all you really need, and such a function would be VERY fast); obviously also in
this case you only execute such a function if mtime>mtime_old... The upload/write
operation after (3) is in the background and it MUST be done anyway... 

If you do not use a shared calendar, mtime will NEVER be greater than mtime_old,
and the performance would be IDENTICAL in this case... So, what do you really
lose???
webdav and ftp also don't scale :)
What you lose is simple code and a usable interface. Those are quite important...

You want to add a few more checks for a changed mtime. The check isn't that
expensive. But if it has changed, the expensive part comes. You can't easily
look for a single changed part. The order might change etc. Also, you can't just
store a string, for the same reason: the order of items might change.

(but in my opinion the whole lots-of-stuff-in-one-big-file just doesn't scale,
so maybe i'm biased on this.)

All those extra checks and things add up to quite some code. And then you have a
poor man's solution. Why not have a real poor man's solution, and only check
every n minutes; and for the others, have something that works, like a
calendaring server or caldav.
If you do not want a shared calendar my changes do not affect you.

If you want a shared calendar AND you know that the calendar has been changed
externally, then you MUST check what changes have been made by others... If you
don't, then you do not end up with a shared calendar... you end up with a
complete mess!!!

There is ONLY ONE reason to ignore other people's changes with shared calendars...
If you are working offline and you intend to sync later on... BUT at the moment
you can NOT sync... ...You can upload/download a calendar, which is quite a
different thing... In fact, without proper syncing, having 2 ics files (local
and remote) does not make any sense AT ALL... it is MUCH worse than having only
one file (the remote one)... it is dangerous because it makes it very easy to
potentially destroy other people's edits, not to mention the entire calendar...
...and it is confusing for the end user... 

Even IF you could sync, ignoring the checks for the sake of performance would
not be a _better_ choice, it would be a DIFFERENT choice, and it would address a
different need (offline calendaring as opposed to online calendaring).

All I am proposing is a VERY SIMPLE hack for the developers (similar to what has
been done already with remote calendars) that makes it possible to create a
decent (not perfect... decent...) shared calendar with very little effort from
the end users (actually ZERO effort). Moreover it does NOT affect performance for
people who only want a local personal calendar. Which means that you are free NOT
to use the poor man's shared calendar if you do not like it.... 

Could you have a _better_ solution? Yes... BUT that would require a proper
backend... i.e. a LOT of effort from the SB developers AND the backend
developers AND the end users who will have to set up a calendar server.
Hopefully that will come sooner rather than later. In the meantime what do we do? 

Moreover, even IF you had such a _better_ solution, there would still be people
who might prefer the poor man's shared calendar because it would be MUCH simpler
to set up... 
(In reply to comment #10)
> If you do not want a shared calendar my changes do not affect you.

Yes, they do. They add more code for me to read. (as a developer)
> In fact, without proper syncing, having 2 ics files (local
> and remote) does not make any sense AT ALL... it is MUCH worse than having
> only one file (the remote one)... 

But the current architecture doesn't allow that. It uses libical, which afaik
can only read local files. So you end up with a local file anyway.
That's why you have a local path and a remote path in the UI. We should fix that
first.
You need some clear indication of whether a file is strictly owned by mozilla
calendar, or can be modified from external sources. Maybe renaming 'remote' to
'shared' will help.
I think the path to the local file can be removed from the UI. If strictly
mozilla's, the user doesn't care where it is. If he does care, it is not strictly
mozilla's, and he should provide a path.


> Hopefully that will come sooner rather than later. In the meantime what do we do? 

Work on it? Working on other stuff will only make the good stuff come later.
>"You can't easily look for a single changed part." 

Yes you can!!!! 

First let me explain why we do all those checks. The idea is that the poor man's
approach can only do file-level locking. But you do NOT lock the ics file from
the moment you start editing it; you only lock it once you confirm the edits, so
it is a MUCH SHORTER lock timewise, which is much safer should the file system
have problems with locks. But to do so you need to be careful not to overwrite
external changes. Hence the checks. 

In (2), you only need to see if the SINGLE event being edited has been changed... 

(2)A: between the last reload (1) and the time you open the "edit event" dialog
(2), so that you can see the changes in the "edit event" dialog; (OPTIONAL).

(2)B: between the time you open the dialog (2) and the time you press ok (3), so
that you do not inadvertently override other people's edits on the same event.
(REQUIRED).

NOTE1: you are only checking ONE event in this case. Both operations A and B
require extracting that event from the ics file as a simple string and doing a
string comparison... You do NOT need libical for that... You only need libical if
A=true, in order to transform a SINGLE event from a string into an object... You
need a function "string = extractEventAsString(FilePath, EventId)"... It has to
quickly scan the text file and see whether EventId is there, and if it is, find
the delimiters (BEGIN:VEVENT and END:VEVENT) and extract everything in between as
a string. This is a fast function. If you do not want to bother you can just use
libical and parse everything into event objects and use those... 
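A rough sketch of such a function (hypothetical, not existing calendar code; it works on ics text passed in rather than a file path, so it is easier to test; it also does not handle folded lines):

```javascript
// Sketch of the extractEventAsString(filePath, eventId) helper described
// above: scan the raw ics text for the BEGIN:VEVENT / END:VEVENT block whose
// UID property matches eventId, and return it as a string (or null).
function extractEventAsString(icsText, eventId) {
  const lines = icsText.split(/\r?\n/);
  let start = -1;
  for (let i = 0; i < lines.length; i++) {
    if (lines[i] === "BEGIN:VEVENT") {
      start = i;
    } else if (lines[i] === "END:VEVENT" && start !== -1) {
      const block = lines.slice(start, i + 1);
      // Match UID only as a property line, so an event that merely
      // *mentions* the id in its description is not matched.
      if (block.some(l => l === "UID:" + eventId)) {
        return block.join("\n");
      }
      start = -1;
    }
  }
  return null; // event deleted externally, or never existed
}
```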

NOTE2: ALTERNATIVELY you might skip (2)A altogether and use the event as
recorded at the latest reload (1) for the B check... So NO (2) AT ALL... No need
for extractEventAsString()... 

In (3), you do a different check (REQUIRED)... You want to avoid overwriting
changes made by other users to OTHER events. External changes to the same event
are taken care of above. In fact you do not even need to know what other events
have been changed... you might just replace the edited event using simple string
replacement... again, without touching libical... (in which case you will have to
wait for an automatic refresh (1) in order to see the changes other people made)...
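The string-replacement merge in (3) can be sketched as follows (hypothetical illustration; the caller supplies the freshly re-read ics text, the event text as it looked at the last reload, and the edited event text):

```javascript
// Sketch of the merge step in (3): re-read the ics text just before saving
// and splice the edited event back in by plain string replacement, so edits
// other users made to *other* events since our last reload survive.
function mergeEditedEvent(freshIcsText, oldEventText, newEventText) {
  const idx = freshIcsText.indexOf(oldEventText);
  if (idx === -1) {
    // The event changed (or vanished) externally since we last read it:
    // the caller should warn the user instead of silently overwriting.
    return null;
  }
  return freshIcsText.slice(0, idx) + newEventText +
         freshIcsText.slice(idx + oldEventText.length);
}
```

Note that the `null` return doubles as the (2)B conflict check: if the event text on disk no longer matches what we started editing from, someone else touched it.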
 
In order to find the event you are looking for, you need to parse the whole
file (or a part of it, until you find it). You can't just grab a substring of
the file; you need to parse the file into blocks of events and check the ID of
each event. When you have found the right ID, you need to parse the event block
into parts, because the order of the data in the event can have been changed
without the actual data being changed. Luckily, you don't need to compare every
field, because there is a last-modified field. But you still need to find that.

You can't just scan the file for the ID string, because that ID can appear in
some other field, in some data text or something else.
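The objection is real for a naive substring scan; anchoring the match to a UID *property line* avoids most false hits. A hypothetical sketch of the difference (ignoring folded lines):

```javascript
// Illustration of the objection above: a naive indexOf() scan can hit a UID
// string embedded in some other field (e.g. a DESCRIPTION), while a match
// anchored to a UID: property at the start of a line does not.
function containsUidProperty(icsText, uid) {
  // Escape regex metacharacters in the uid, then require ^UID:<uid>$ on its
  // own line (ics property names are case-insensitive, hence the "i" flag).
  const quoted = uid.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp("^UID:" + quoted + "\\r?$", "mi").test(icsText);
}
```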
> Yes, they do. They add more code for me to read. (as a developer)

This is not a valid argument... Any feature adds some code... By that logic you
would have to drop ALL features... The point is that it adds VERY LITTLE CODE and
will provide GOOD functionality, more than enough for SEVERAL users. I mean, once
reloadCalendar() is implemented with automatic reloads as mentioned above... 

In its simplest form. 

A) Add a call to "reloadCalendar()" when you press ok in the "edit event" dialog
and before saving. -> 1 line of code

B) Check if the SINGLE event has changed between (3) and (1) and if so issue a
warning -> very few lines... 

C) Merge the new event into the latest reloaded calendar (3) just before saving
-> a couple of lines??? 

THAT IS IT!!! The rest is absolutely optional!!! 

Moreover, since reloadCalendar() should simply exit unless mtime>mtime_old, there
is no performance hit for non-shared calendars...  
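Steps A+B+C can be wired together as in this sketch. Every function it calls is a hypothetical stand-in supplied by the caller: `readCalendarText` re-reads the ics file, `findEvent` extracts one event's text by UID, `confirmOverwrite` asks the user; the return value is the merged text to save, or `null` to abort the save.

```javascript
// Sketch of the "ok pressed" flow (A+B+C above). All names hypothetical.
function onEditOk(opts) {
  const fresh = opts.readCalendarText();            // (A) reload before saving
  const current = opts.findEvent(fresh, opts.uid);  // event as it is on disk now
  if (current !== opts.originalEventText) {
    // (B) someone changed (or deleted) this event since we loaded it
    if (!opts.confirmOverwrite()) return null;
  }
  if (current === null) {
    // Event was deleted externally; append the edited event instead.
    return fresh.replace("END:VCALENDAR",
                         opts.newEventText + "\nEND:VCALENDAR");
  }
  // (C) splice only our event into the fresh text, keeping others' edits
  return fresh.replace(current, opts.newEventText);
}
```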
  
>But the current architecture doesn't allow that. It uses libical, which afaik
can only read local files. So you end up with a local file anyway.

The dichotomy "local/remote" file is not needed without syncing. If you need
storage between reloads, either you use memory OR a hidden temp file... The user
does not need to know... He should NOT have to fill in 2 boxes (local and
remote)... He should NOT have to press refresh/upload... Either you provide
proper syncing (with ONE sync command) or you present ONE calendar. The
refresh/upload mechanism is a recipe for disaster, because people will try to
use it to simulate syncing... and that is NOT a good idea...

>Maybe renaming 'remote' to 'shared'
It won't... 

>I think the path to the local file can be removed from the UI.
YES!!!

>Work on it? 
That's an idea. Another idea is to write a few lines to provide GOOD, SIMPLE AND
TRANSPARENT functionality which people might want NOW and might want TOMORROW...
Even after the "proper" calendar server is in place... 
>You can't just scan the file for the ID string, because that ID can be in some
other field, in some data text or something else.

What are the chances of having a meeting with "Mr UID:
7c622888-fcc1-11d8-948f-e939c0c5f171"??? Where
7c622888-fcc1-11d8-948f-e939c0c5f171 is at the same time the UID you were
looking for and the name of the person you are meeting.... :)

If that scares you... then what about refresh/uploads??? Statistically it is
like comparing being hit by a brick in the countryside to crossing Madison Ave
blindfolded.....

IF that is a worry, you may add some additional sanity checks once you have a
match... Not a big issue... For instance, parsing the single extracted event with
libical and checking what the UID is... 

Moreover, note that this text function is optional... 
It may sound like a few lines of code to you, but it isn't. Simply alerting the
user about a changed remote calendar takes like 10 lines. Then doing something
with the reply is another 10. etc. It all adds up.

Writing a stable app isn't about guessing that something won't happen. It's
about knowing what will happen, and being prepared for the worst. So guessing
that some string will only show up as the ID of the event you are looking for is
not writing stable software. Ignoring locking is not writing stable software.

Now, all my messing around here is not because I think sharing a file is evil
and should never be implemented. It's because I think the right architecture
should be created first. Then, on that architecture, we can write code to support
local files, remote files, caldav calendars etc. And that's why I'm thinking and
working on that architecture.
>It may sound like a few lines of code to you, but it isn't. Simply alerting the
user about a changed remote calendar takes like 10 lines. 

Why? Isn't there something like: 
if alert( getlocalizationfor(overwrite_external_changes_?), Allowanswers_yes_no
) == yes then save(EventObject) else exit.... 

What is wrong with that???

>Writing a stable app isn't about guessing that something won't happen. It's
about knowing what will happen, and being prepared for the worst.

That is EXACTLY why you should have all the checks I mentioned... Your argument
is not coherent... On the one side you require "stability" for a simple popup
dialog using several lines of code (???)... At the same time you are ready to
give up improved safety AND USABILITY of the ACTUAL remote calendar by avoiding
obvious checks against concurrent edits because "they add more code"... or even
worse because of alleged performance problems... so we allow people to overwrite
each other's events... Nice idea!!! Try to agree with yourself first.... 

> So guessing that some string will only show up as the ID of the event you are
looking for is not writing stable software. Ignoring locking is not writing
stable software.

I think you do not see the problem. Let's see if we understand each other: THE
CURRENT IMPLEMENTATION OF THE REMOTE CALENDAR IS ****!!! 

Just to make an example: one employee might upload an empty calendar by mistake
and destroy ALL the appointments for everybody else on the network... You will
be left with out-of-sync individual local files which would be impossible to
merge together again. Even worse... The other users might decide to refresh...
thus destroying their local copies as well... Sorry if I am abrupt but that IS
the case... My suggested implementation is WAY SAFER as far as concurrent edits
are concerned; a situation like the one above is IMPOSSIBLE because, before
upload, you must merge the individual event into the downloaded calendar. Not
just that: there is no danger of permanently off-sync calendars, a solution that
comes much closer to a proper ONLINE shared calendar. It UNIFIES and SIMPLIFIES
the interface for the end user: ONE box for the calendar location, no mysterious
double calendar, no manual refreshes, no manual uploads, no "publish calendar
automatically" checkbox, no "subscribe to remote calendar", no nonsense... 

ONE box, whether the calendar is on webdav, on ftp, on a shared folder or a
simple local file. Whether the calendar is shared or not... The checks are
exactly the same. There is NO loss of performance for non-shared calendars, and
for shared calendars you perform download/reads ONLY after external changes,
which is what you should do anyway if you really care about "safety". IF and
when there is proper syncing and/or a proper calendar server, then and only
then, you can ADD (not replace, add) more options for storing your events...

As for locking, this would greatly reduce problems by minimizing the locking
period to a few milliseconds (for a lan) as opposed to locking the full file
from the moment the edit starts... Not to mention that there are other easy
tricks outlined above to improve this aspect as well.... Furthermore, nothing
forbids using webdav on top of it for the locking... In MOST CASES this would be
more than adequate; not bullet-proof, but perfectly acceptable... 

> It's because I think the right architecture should be created first. 

If an alert message requires 10 lines, the _right_ architecture would require
thousands of lines... And several bugs... And you worry about an alert message.... 

While we wait for enlightened developers to create the _right_ architecture,
normal people will be losing appointments in mid air NOW, because this is what
happens when you use a simple download/upload mechanism at file level to replace
record-level syncing... A situation which could be easily avoided with the hacks
mentioned....

Furthermore, the _right_ architecture requires end users to set up and maintain
a backend... And many users might want to trade off some of the _rightness_ for
simplicity... Which does NOT mean that the _right_ application should not be
written... It simply means that you can provide A MUCH SAFER WORKING
ALTERNATIVE NOW, with little effort. 
(In reply to comment #17)
> if alert( getlocalizationfor(overwrite_external_changes_?), Allowanswers_yes_no
> ) == yes then save(EventObject) else exit.... 
> 
> What is wrong with that???

It doesn't use the prompt service, so will look wrong on mac.

> I think you do not see the problem. Let's see if we understand each other: THE
> CURRENT IMPLEMENTATION OF THE REMOTE CALENDAR IS CRAP!!! 

The way it is coded, and the UI, leave things to be desired. As a remote
calendar, it works. Sharing between multiple users at the same time doesn't work
currently.
I would like the code and the UI fixed before we add a new feature: sharing.

> It UNIFIES and SIMPLIFIES the interface for the end user:

Let's start with that then. Simplified UI.
> If an alert message requires 10 lines, the _right_ architecture would require
> thousands of lines... And several bugs... And you worry about an alert
message.... 

Nope. The right architecture would be moving code around, so that related code
lives in one file.

(And please limit the use of CAPS, _underlines_ and general verbosity. It makes
the comments hard to read)
I'd like to make it clear that I won't stop anybody from writing the extra
checks etc. (and if I wanted to, I couldn't, because I don't own the code). I'm
just saying that it will be somewhat messy, and not just 2 extra lines of code.
>The way it is coded, and the UI, leave things to be desired. As a remote
calendar, it works. Sharing between multiple users at the same time doesn't work
currently.

What is the point of having a public remote calendar if you cannot share it
between multiple users??? Then it must be an r/o calendar for everybody except
the editor, like a static html page... For such tasks you should just use a
webdav/ftp/scp client to send the file to the server; that is why they
invented such instruments... 

Having local/remote files + refresh/publish commands within the calendar leads
end-users to believe that there are facilities for offline calendars as well as
for shared calendars... In fact I had this clear impression, and only after a
few tests did it become clear that in fact there is no offline facility (since
there is no sync) nor a shared calendar (since there is no check for concurrent
edits). That is completely misleading... 

...and dangerous... ...if people know what they are doing then they will know
how to publish a file on a remote server without any help... ...but, if people
do not know what they are doing, they should not have easy access to tools that
can potentially screw things up (like accidentally deleting all events in a
remote calendar which other users might want to access); that is poor design.

My suggestions will help reduce problems linked to simultaneous edits and avoid
major disasters like losing entire calendars. This is a good thing anyway when
you use flat files as storage, wherever they reside. In fact, even now, nothing
forbids users from pointing multiple clients at the same ics file (whether on
webdav, on ftp, or on a shared folder)... If they do so they end up with a
mess... Checking for concurrent edits and keeping relatively fresh data on all
clients is a simple fix which will greatly limit potential damage and reduce the
gap to a proper shared calendar... This should be done even if a real calendar
backend system were available, as long as there are people using flat files... 

>Let's start with that then. Simplified UI.

That is an excellent idea!!! But IMO the two things go hand in hand... And they
go hand in hand with the reload function (the subject of this bug)... In fact
most of the simplification in the GUI should come from: 

A) eliminating the dichotomy between local/remote calendar files
B) eliminating the refresh/publish mechanism

To do either A or B you must read/download a calendar at startup; you might want
the user to choose the reload frequency; certainly, without proper syncing, you
need an automatic download+check+upload after an event is edited... The
"publish" button is not a safe approach... 

All in all, changes to the GUI should be relatively simple, since mostly they
imply commenting out elements in the XUL file. In particular you should get rid of:

- Add new Calendar dialog >  second Location box and label (useless without sync)

- Add new Calendar dialog >  checkbox and label (changes should be published
automatically... if they are not, without syncing, you are just asking for
trouble... moreover the procedure should always be:
if(mtime>mtime_old,read/download) then do checks then merge_edited_event finally
save/upload... even for local files) 

- Main menu > Tools > Subscribe to remote calendar (user can only "add a
calendar"... no distinction between remote or not)

- Main menu > Tools > Publish selected events 

- context menu > publish entire calendar (this is sadomasochism!) 

- Delete Calendar dialog > Delete Calendar And File (another shortcut to sure
disaster)

- Calendar Options Dialog > Reload Remote Calendars On Startup (calendars
should always be downloaded at startup... Unless you provide syncing...)

All of the above is superfluous and confusing, even dangerous...

You might want to add a listbox with the autoreload intervals (with the option
"no autoreloads") to the add event dialog, or you might leave it as a hidden
option to simplify things and just set it to something reasonable (say 15
minutes)...

> Nope. The right architecture would be moving code around, so that related code
lives in one file.

To do proper shared calendars and syncing, you will probably need to allow for
storing the events individually as opposed to storing them in a single flat
file... You need a facility to "talk" to one or more back-ends
(DB/IMAP/webdav)... You need additional storage, e.g. you need to keep a record
of the events you deleted while you were offline in order to sync later on... I
am happy with all of it. But you should also improve the situation for flat
files... They are not mutually exclusive tasks. 
> I'm just saying that it will be somewhat messy, and not just 2 extra lines of
code.

Alerting people that their edits are conflicting with someone else's changes to
the same event is a must not an option... Making sure that when you change event
A you do not overwrite also C D E F... is a must not an option... 

It does not matter where the ics file resides (locally or remotely, as far as it
is a flat-file), nor does it matter if it is going to be shared or not (no way
for you to know in advance what the end-user is going to do, just be prepared
for the worst and limit the damage). 

As a bonus, if you do that, you will have a chance to simplify the interface... :)
  
(In reply to comment #20)
> What is the point of having a public remote calendar if you can not share it
> between multiple users??? 

Multiple concurrent users doesn't work. Multiple users at different times does work.
A private calendar works.
A read-only calendar works (like the holiday calendars).


> A) eliminating the dicothomy between local/remote calendar files

Ok. 

> B) eliminate the refresh/publish mechanism

No. You should be able to publish your calendar to some online storage if you
want. Why should it be removed?

> - Add new Calendar dialog >  checkbox and label (changes should be published
> automatically... if they are not, without syncing, you are just asking for
> trouble... moreover the procedure should always be:
> if(mtime>mtime_old,read/download) then do checks then merge_edited_event finally
> save/upload... even for local files) 

Saving is a _slow_ operation. Doing that always, even when the calendar data is
private to mozilla, will lead to a slow program. Really, saving should be limited
when not needed. (Or we should fix the slowness somehow, but that is definitely
not just 2 lines of code)

> - Main menu > Tools > Subscribe to remote calendar (user can only "add a
> calendar"... no distinction between remote or not)

Agreed. Always been confusing to me.

> - Main menu > Tools > Publish selected events 

In theory, it can be useful to save a few of your events to remote storage.
But in practice, I doubt selecting the events you want is easy. Especially if
done often. The user should just create a new calendar :)

> > Nope. The right architecute would be moving code around, so that related code
> lives in one file.
> 
> To do proper shared calendars and syncing, you will probably need to allow for
> storing the events individually as opposed to storing them in a single flat
> file... You need a facility to "talk" to one or more back-ends DB/IMAP/webdav...
> You need additional storage e.g. you need to keep a record of events you have
> deleted when you were offline in order to sync later on...

That is not the architecture. The architecture is what allows for stuff like
that, in a clean way.
With the architecture I'm thinking of, a developer could write a module for
'calendar-dir'-like storage (like maildir, one event per file). Another module
could handle storage on imap (although afaik imap doesn't allow modifying of
messages; another problem to solve in another bug). Yet another would allow
flat-file storage.
Offline is another fun thing. I think it should be done in the component doing
the actual storage.
>Multiple concurrent users doesn't work. Multiple users at different time does work.

I do not think this statement is correct.

Example 1: I change event A, I have "publish changes automatically" off. After I
do my changes another user edits events A and B, and puts the changes on the
remote calendar. I then decide to upload manually ("publish entire calendar").
The changes of the other user are lost, for both A and B. 

Example 2: If I use "publish selected events" instead of "publish entire
calendar", you might think that A is inserted into the existing calendar...
Wrong... I will not only destroy the other user's events but all the events
except A... Cool!

Example 3: Even if I publish the changes immediately, the other user does not
know that A has been modified since his last reload; he cannot look at my work,
and will only destroy my changes without warning.

Example 4: Instead of subscribing to a remote calendar, I publish from an
existing calendar pointing at the shared one: ALL the changes of every user
publishing on that calendar are gone for good.

Example 5: Delete the "remote calendar and file"... 

Example 6: I can accidentally press "refresh" instead of "publish"... replacing
a new calendar with an old one...

Is this what you call working with "Multiple users at different time"? Well I
don't... We are not talking only about simultaneous edits as in "locking
problems"; locking problems are the least of my worries... There are basic
checks missing here... That, contrary to what you say, makes the calendar
absolutely useless for multiple users, since it is far too easy to have events
lost in mid air... As the last example shows, it is dangerous even for a single
user playing with a remote calendar...

I think you might be confusing locking issues with the checks that prevent
overwriting other users' edits. The two concepts coincide if and only if you
start locking (at record level or at file level) before you do the edits,
before you show the "edit event" dialog to be clear. But this is often not a
desirable approach, or not even possible, not to mention that you have little
control over how locking works on the back-end... In most cases they are
different and complementary issues: you cannot simply rely on locks, you need
additional checks; such checks do not replace locks, but minimize potential
problems...

Furthermore, even if you want to rely on locks alone, you must still
read/download before you edit, so it is not really an issue of performance...

There is also some confusion between sharing and doing offline/syncing tasks.
Syncing should be thought of as a layer for conflict resolution on top of the
load/save mechanism... Syncing allows you to publish your changes later on, all
at once, but it does so because it provides far more checks than in the online
case + logic + additional bits and pieces! It is not a way to avoid checks; it
uses all the checks that should be in place for a normal shared online calendar,
plus additional ones!!!! And while you sync you have all the problems related to
locks and concurrent edits that you normally have with online shared calendars...

It goes without saying that you cannot use a simple upload mechanism to replace
syncing, which means you cannot use a simple upload mechanism to provide offline
functionality, which means no sync -> no offline calendar...

>A private calendar works

What is a "Private calendar"? How do you enforce a calendar to be private? How
do you know that the file is on a "local" folder which is not on a mount, shared
folder or exported folder? How do you know that multiple users are not pointing
to it? 

> A read only calendar works (like the holiday calendars)

I would hope so... :)

> No. You should be able to publish your calendar to some online storage if you
want. Why should it be removed?

Because it is extremely dangerous, as shown in the examples.... It makes
destroying an entire calendar a very, very, very easy task... For publishing 
you have Filezilla, just to mention one of the two billion tools designed for
this task... 

Moreover "publishing" provides only intermediate functionality... It cannot by
itself provide the functionality of calendar sharing, even less that of
calendar syncing... In a well designed app, end-users should not be bothered
with intermediate functionality. End-users do not want a tool for finding
conflicts between events, another tool for uploading their records, another tool
for downloading them, one more for marking files that were deleted on the
remote calendar, one for showing diffs.... They cannot be responsible for
applying such tools in the right order to avoid problems...

They should only choose between online and offline mode. In online mode they
should just press "ok" in the "edit event" dialog... Nothing else! Whatever the
calendar backend.... In offline mode they should press the button "sync"
once in a while.... That is all the complexity an end user should take on his
shoulders... Anything more and you are in for trouble...

> Saving is a _slow_ operation. Doing that always, even when the calendar data
is private to mozilla will lead to a slow program. 

Who cares if a program is as fast as a rocket or as solid as a rock if the most
important thing, user data, is not safe at all???? 

The concept is simple: no sync -> no offline calendar -> changes must be
immediately saved

Once you have a proper syncing mechanism in place, then and only then, you might
allow people to work in "offline mode", if they so wish.....  

In fact, if you take the examples above and you turn on the option "Publish
changes automatically" (which enforces a behaviour very similar to what I
suggested at the beginning) on all clients, things behave much more predictably,
and you have far fewer problems!!!! This option cannot be off, unless you can
sync! Which is another way of saying: no manual publishing!!!

> Really, saving should be limited if not needed. 

But it is needed!!!! That's the point I am trying to make! It is not an optional
feature like "It would be so loveeeely to immediately see other people changes",
it is a required feature like: "your f***g application destroyed my records and
I missed an important appointment, thanks a lot you a****s!". There is only one
way to publish events ex-post and that is proper syncing... not uploading...
Which implies even more checks than the ones I mentioned... 

>With this architecture I'm thinking of, a developer could write a module for
'calendar-dir' like storage

I agree.... But essentially there are 4 ways to work with calendar back-ends. 

I) records are saved individually -> record-level locks (IMAP/DB/webdav...) 

IA) read/download and lock the individual record before you start editing, then
save/upload

IB) read/download and lock the individual record after you edited it and before
you save it, then check if there were changes to that record since the last
reload, then save/upload

II) records are saved in a single file -> file-level locks (webdav/ftp/shared
folder..)

IIA) read/download and lock the file before you start editing, then save/upload

IIB) read/download and lock the file after you edited it and before you save it,
then check if there were changes to that record since the last reload, then
merge only the individual edited record, then save/upload (+ optional additional
checks)

III) You could also upload without any read/download operation, which often
means no lock at all, and no check, but that is pure suicide...

I/II are complementary choices, you should have them both; III is a joke. A/B
are alternatives, you should decide for one of the two. I am arguing in favour
of B, since it does not rely only on locks to avoid overwriting other people's
edits. If locks work well it is a plus; if they don't, the problems are greatly
reduced, though not eliminated. This is not only because B performs explicit
checks, but also because it holds a far shorter lock timewise, which is a good
thing in itself, particularly with flat files...

Furthermore, note that B is very similar to what has already been implemented,
so few changes should be required. In fact the situation is not that
desperate.
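A minimal sketch of variant IIB, using a toy "id:summary" flat-file format;
every name here (parseFlatCalendar, saveEventIIB, the `remote` object, and so
on) is a hypothetical illustration, not the actual calendar code:

```javascript
// Sketch of variant IIB (assumed names, toy flat-file format):
// re-download the calendar, check that the edited record is unchanged since
// the last reload, merge only that record, then upload.

function parseFlatCalendar(text) {
  // "1:dentist\n2:lunch" -> { "1": "dentist", "2": "lunch" }
  var events = {};
  text.split("\n").filter(Boolean).forEach(function (line) {
    var i = line.indexOf(":");
    events[line.slice(0, i)] = line.slice(i + 1);
  });
  return events;
}

function serializeFlatCalendar(events) {
  return Object.keys(events).map(function (id) {
    return id + ":" + events[id];
  }).join("\n");
}

// "remote" stands in for the published file; lastSeen is the record as it
// looked at the user's last reload.
function saveEventIIB(remote, lastSeen, eventId, editedSummary) {
  var events = parseFlatCalendar(remote.text);      // re-download + parse
  if (eventId in events && events[eventId] !== lastSeen) {
    // the record changed remotely since our last reload: warn, don't clobber
    return { ok: false, conflict: events[eventId] };
  }
  events[eventId] = editedSummary;                  // merge only our record
  remote.text = serializeFlatCalendar(events);      // upload
  return { ok: true };
}
```

The point of IIB shows up in the behaviour: all other records survive a save,
and a conflicting concurrent edit is detected instead of silently overwritten.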

Syncing/offline browsing is a layer on top of I/II which provides additional
checks before merging the records. Though I doubt you can have syncing
without a proper backend (I); for one thing, DTSTAMP (which is used extensively
by the syncing algorithm) should be set by a single clock... If you use the
client's clock to determine DTSTAMP, events on machines with fast clocks will
overwrite later changes on machines with slow clocks...

The distinction between I and II also affects autoreloads (if IA/IB, do we
download all records or only the new ones?). Complex backends (IA/IB)  might
also allow for push configuration (i.e. the backend announces to the clients
that there are new changes).
(In reply to comment #23)
> >Multiple concurrent users doesn't work. Multiple users at different time does
work.
> 
> I do not think this statement is correct.

(snip ways to destroy data)
Sure, you can delete your data if your setup is wrong. If you don't publish
data, stuff will go wrong. If you have one file open at multiple places at the
same time, you are in what I said doesn't work. So, I don't see the point of the
examples.

> in mid air... As the last example shows it is dangerous even for a single user
> playing with a remote calendar...

Should 'delete file' be removed from windows explorer too, because it can
destroy data?


> >A private calendar works
> 
> What is a "Private calendar"? How do you enforce a calendar to be private?

A file only one person uses. And given that a person can only be at one place at
a time, there are no real problems, especially if publish automatically is on.

> > Saving is a _slow_ operation. Doing that always, even when the calendar data
> is private to mozilla will lead to a slow program. 
> 
> Who cares if a program is as fast as a rocket or as solid as a rock if the most
> important thing, user data, is not safe at all???? 

It is safe, if you do the things it can do. If we want to add more features
(sharing a calendar), we should make it not suck from the beginning.


> In fact, if you take the examples above, and you turn on the option "Publish
> changes automatically" (which enforces a behaviousr very similar to what I
> suggested at the beginning) on all clients, things behave much more predictably,
> and you have far less problems!!!! 
So, that's why the checkbox is there. You now say everything works, if you set
it up correctly. What's left to be done?

> > Really, saving should be limited if not needed. 
> 
> But it is needed!!!! That's the point I am trying to make! It is not an optional
> feature like "It would be so loveeeely to immediately see other people changes",

It is a feature. I use calendar without it without any problems. Everybody just
messing with their own data on their own box doesn't need this extra feature.


After all these words, what really should be done in order to allow for
sharing a calendar file?
- making 'publish changes automatically' check for changes before the dialog
shows, and after editing and before saving.
- reloading every N minutes.
- making reload check for mtime changes
Is this list correct?

(again: please be less verbose. Long comments like yours are hard to read, and
it is hard to get the point.)
> Sure, you can delete your data if your setup is wrong. 

In fact at the moment there is only one set-up that is (almost) right: "Publish
changes automatically"... All the other options and configurations are complete
rubbish, and extremely dangerous, until there are safe alternatives (proper
syncing) they should be removed from the interface. One should never be able to
destroy data in a simple way! Particularly data used by others! 

> If you don't publish data stuff will go wrong. 

Exactly!!! So why don't you just publish the data without asking? Since when do
you create a program assuming that the users behave?  

> If you have one file open at multiple places at the same time , you are in
what i said doesn't work. So, i don't see the point of the examples.

Without "publish changes automatically" there is only one way a "multiuser"
calendar is going to work.... If, when one user makes a change, all the others
magically refrain from making changes, and after that user has saved his
changes, all the other users magically decide to refresh before touching
anything, and if they magically avoid pitfalls like deleting calendar + file,
and if they magically avoid publishing empty calendars and ....

So yes, it works... if you believe in magic...
    
> Should 'delete file' be removed from windows explorer too, because it can
destroy data?

That is different because you will have to do this manually, with another
application, you will have to find the calendar, it would be intentional. No
need to help people in this noble task...

> A file only one person uses. 
> And given that a person can be only at one place at a time

Again, you are assuming that the user behaves.... 

An example... Let's say a husband, having discovered that in SB he can see other
calendars by pointing at them, has a wonderful idea: "hey darling, just point to
my calendar and you can see what I am doing, cool eh"... Then the wife: "Honey
this SB is really cool, I have inserted your dentist appointment in there from
my laptop, no more stickers on the fridge"... Really, really, cool...
Unfortunately, now all other appointments inserted by the husband since his wife
did a reload are gone...

Another example... Say there is only one user, you; you go from place A to place
B... Say you have left SB open on B and you forget to refresh B, importing your
latest changes from A. You start adding events in B. Now either you lose the
changes of A or the ones of B... or you do manual "syncing", a very fun task...

Or, using a remote calendar, always as a single user, you invert refresh/publish
buttons (it happens).... All the new changes are gone....

All I am saying is: do not assume anything, do the checks always. If the user
"behaves" there is no loss of performance; if he does not, then there is a
safety net....

> It is safe, if you do the things it can do. 
    
I have been programming for a while myself. The sentence "if you do the things
it can do", referred to the end-user, is pure nonsense. An application should
simply not allow someone to do things that are not supposed to be done, full
stop! But here you even put context menu entries for such tools, and the only
reasonable option is off by default!!!!
    
> So, that's why the checkbox is there. You now say everything works, if you set
it up correctly. What's left to be done?

To begin with, remove the checkbox!!! If that's the only way to make things
work, it should not be an option at all!!!! And remove the "other" ways to do
things (see http://bugzilla.mozilla.org/show_bug.cgi?id=256890#c20 for things to
eliminate from the interface)... Ways dangerous for the data (unless "properly"
used), confusing for the end-user and superfluous, since in their "proper use"
(amongst the several improper ones), they simply replicate existing instruments
(e.g. filezilla).

> I use calendar without it without any problems. 

But you are a developer! You cannot just assume other users "know" how to do
things!!! Even M$ programs would work under such an assumption!!!
> After all these words, what really should be done in order to allow for
> sharing a calendar file? 
> - making 'publish changes automatically' check for changes before the dialog
> shows, and after editing and before saving.
> - reloading every N minutes.
> - making reload check for mtime changes
> Is this list correct?

Basically YES!

legend: PCA="publish changes automatically"

+ The only important check is to warn users if the event being edited changed
from the last reload to the time you commit the changes. Since PCA already
performs a read/download operation before saving the data both event objects are
already available. That should be easy, compare the two event objects and issue
a warning if necessary.
    
+ The other important piece is to only overwrite a single event. Nothing to do
here!!! It seems that PCA merges the individual edited event into the downloaded
calendar, thus preserving all other events. Which is good! In fact PCA is
very similar to what I mentioned in my first post (3). 

+ The "check for changes before the dialog shows" is optional. Just skip it for
the time being. It requires one additional read/download operation. 


Other issues that emerged in this discussion:


+ Always apply PCA -> No checkbox. There are only 2 ways of working safely:
online (PCA) and offline (proper syncing); I.e. without syncing there is no safe
alternative to PCA.

+ Treat local and remote calendars the same way -> A) apply PCA also to "local"
calendars. B) Also reload local calendars. This will simplify the code, the GUI,
and it will make things much safer... For simplifying the GUI see
http://bugzilla.mozilla.org/show_bug.cgi?id=256890#c20

+ Always reload all calendars at startup -> no checkbox. A consequence of the
previous points.

+ There seems to be a "builtin" reload function within current implementation of
PCA, but is applied to remote calendars only. Let's not duplicate things, there
should only be one reload() function (also called by PCA) that is applied to
every calendar (local or not) and always checks mtime before doing anything.

+ Reload_every_n_minutes might be an option in prefs.js to begin with. In the
future there should be a per-calendar autoreload option. Allow also for
NoAutoReload as one of the possible choices. You can always use "Refresh Remote
Calendars" to reload manually (get rid of the "Remote" word). Obviously, the
autoreload parameter does not affect reloads at startup and after edits (PCA),
which are compulsory (unless in offline mode, once there is syncing).

+ Later on you might consider additional optional checks and tricks, but do not
bother now. E.g. using files as flags to implement additional checks for locks,
doing a read operation after each save to make sure the event is actually in the
file and was not overwritten because of a race condition while saving...
Tricks to improve performance, like using fast text functions instead of parsing
the full ics file into calendar objects...
(In reply to comment #25)
> > Sure, you can delete your data if your setup is wrong. 
> 
> In fact at the moment there is only one set-up that is (almost) right: "Publish
> changes automatically"...

You make it sound like the only way to use calendar is by sharing a remote
calendar file with multiple users at the same time, and that all other ways are
rubbish. Stop doing that!
Not everybody uses it that way. In fact, I have never seen a report about
someone losing data in any of the ways you described.

> So yes it works... if you believe in magic...

No, I don't believe in magic. I believe in what I see. No reports on missing
data. Users using calendar in other ways than you describe.

> An example... Let's say a husband, having discovered that in SB he can see other
> calendars by pointing at them, has a wonderful idea: "hey darling, just point to
> my calendar and you can see what I am doing, cool eh"... 

Sigh. That is a shared calendar used by multiple users at the same time! I was
talking about other uses!

<Snip, the same move to one use a few more times>


And about the things to do: they should be done in separate bugs. We could do
the core list of three here, and the other things you mentioned later.

> + The only important check is to warn users if the event being edited changed
> from the last reload to the time you commit the changes. Since PCA already
> performs a read/download operation before saving the data both event objects
> are already available. That should be easy, compare the two event objects and
> issue a warning if necessary.

It isn't easy, for reasons I already explained. Currently, the remote file is
downloaded over the local file. Then a pointer to that file is passed to
libical to parse.
To compare two items, the file needs to be downloaded somewhere else. Then
libical has to parse it. Then the right event has to be found and compared.
Quite some code.
> You make it sound like the only way to use calendar is by sharing a remote
> calendar file with multiple users at the same time, and that all other ways 
> are rubbish. Stop doing that!
> Not everybody uses it that way. In fact, i never seen a report about 
> someone loosing data in any of the ways you described.

I showed with clear examples that no other use is secure. Not even a single user
working in 2 locations off the same file and/or with a local/remote calendar.
Only pure local mode and pure read-only mode work reliably... And, by the way,
the implementation of the read-only calendar depends on the os/filesystem/daemon
hosting the ics file.... Saying that things are secure "if they use them in the
right way" is nonsense....

I do not care about reports, nor should you; you should do the tests yourself
following my examples, and check if data gets lost or not... When it is easy to
lose data, it is always a big issue, not a matter of "features"... The current
implementation is rubbish; please do not take it personally, I might be abrupt,
but I am trying to be constructive, and nobody more than me would love to see
the obvious holes patched up... Which, by the way, is not too difficult. We
certainly wasted way more time and lines in this discussion than would have
been necessary to fix the mentioned issues...
    
Not only is it a problem of security, but the interface becomes needlessly
complicated. This is all you need in a calendar app GUI...

...for online calendars...

* Ok button on edit_event_dialog/delete_dialog 

...for offline calendars...

[*] Offline mode switch button 
[*] Sync button

Note that the above should be inserted if and only if proper syncing is
available. Note also that the two commands above are completely optional; you
could have offline calendars without the end user knowing or doing anything at
all about it.... You could keep a hidden local copy for each remote calendar,
overwriting the local copy at each load/save operation, as happens now
(alternatively, only create a local file if needed: when you turn off SB or you
lose the connection to a remote calendar). Once you turn SB on, try to
automatically reconnect and sync... If that is not possible, work in offline
mode using the local file; otherwise work in online mode as usual....
   
...This is what you have in the current implementation: 

* Ok button on edit_event_dialog/delete_dialog
* Subscribe to remote calendar
* Publish entire calendar
* Publish selected events
* Refresh Remote Calendars
* Reload Remote Calendars On Startup
* Optional second Location in Calendar Options 
* Publish Changes Automatically

...and there is not even proper offline mode...  

> Sigh. That is a shared calendar used by multiple users at the same time! I was
talking about other uses!

And how exactly do you plan to avoid the husband using the calendar in such
a way??? I simply showed you with this example that in the real world you cannot
just assume that people are going to work in "local" mode because they say so in
the options.... People have "ideas"... :)
> Currently, the remote file is downloaded over the local file.

Does it check for mtime before downloading/parsing? 

Hmmm.... It looks to me that with PCA you write files three times: locally
immediately after you download from remote location; locally after your changes
are merged; remotely.... There are two issues here... 

1) Whether the underlying libraries (FTP/WebDAV/LibcalParser...) require working
with files as opposed to string variables.... Namely, do they allow you to
read/write a remote file straight into/from a memory string buffer? If that is
the case, you should use memory (1A); otherwise you must obviously use
intermediate files (1B).

2) Whether you should use permanent "local" files for each calendar or only
disposable files as needs require.... Consider that at the moment the calendar
is kept in memory, and after being parsed, the local file is not really used
except when saving... So it is possible not to use permanent local files... This
is an important issue which affects the architecture. There are 2 possible uses
for "local" files I can think of:
 
2A) Offline calendars. Provided that such a facility is not available at the
moment (no proper syncing), there are two ways of implementing offline
calendars: 2A1. keeping an offline calendar always available in the background
and always in line with the in-memory calendar (which is basically the current
implementation); 2A2. creating such "local" files automatically from the
in-memory calendar whenever SB is turned off, whenever you cannot establish a
connection to a remote calendar, or when the user manually switches to offline
mode.... The advantage of 2A2 being that, when remote calendars are available,
the local copy is not used at all (it could even be deleted).

2B) A "local" calendar... But that is deceiving... A "local" file might reside
on a shared/mounted/exported folder.....  So it might in fact be a remote
calendar, no way to know... Best thing is to treat all calendars as potentially
"remote" ones (if they are really local and offline mode is automatic you will
never go in offline mode...). It would avoid problems with misuses, it would
simplify the architecture and the interface... The only real difference is that
in this case you can access the content of the file directly to/from memory
stringbuffer (1A)... (2B) therefore collapses into (2A)...

All in all the relations are: 

2A1 <- 1B=!1A
2A2 <- 1A=!1B

So 2 really depends on 1, and 1 depends on how the libraries are implemented...
Anyway I do not think the user should know about the "local" files... -> Get rid
of one location box in "calendar edit" dialog. 

> To compare two items, the file needs to be downloaded somewhere else. 
    
Why do you need to store it "somewhere else"? 
    
You could keep in memory the old event before you reload the full calendar
(assuming mtime>mtime_old)... This way you can overwrite the local file (if you
need a local file at all...)  Something like...

X = CalendarObject.getEventObject(eventId).serialize2string()
CalendarObject.load()
if (exists(CalendarObject, eventId)) {
    Y = CalendarObject.getEventObject(eventId).serialize2string()
    if (X == Y) {
        // nobody else touched the event since the last reload: safe to commit
        CalendarObject.editEvent(eventId, editedEventObject)
        CalendarObject.save()
    } else {
        if (issueWarning("Event XYZ has been modified" /* ... */)) {
            // ...
        }
    }
} else {
    // event is new (or was deleted remotely): add the edited version
    CalendarObject.addEvent(editedEventObject)
    CalendarObject.save()
}
GUI.refresh()

Similarly when deleting an event...

PS Note that in the code above you could have worked (load/save) at event level
(if supported by the backend) instead of working at calendar level.... In fact
you might always work at event level with plugins... considering flat-file
back-ends as a plugin family that will do, for instance: event.load() ->
calendar.load() + calendar.getEvent()
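The mapping in the PS above can be sketched as an adapter, again with purely
hypothetical names (FlatFileBackend, loadEvent, saveEvent are illustrations,
not the real plugin API): a flat-file backend exposes an event-level interface
by delegating each event operation to whole-calendar load/save.

```javascript
// Sketch (assumed names): a flat-file backend in the plugin family described
// above. loadEvent()/saveEvent() are implemented on top of whole-calendar
// load/save, using a toy "id:summary" per-line format.
function FlatFileBackend(loadText, saveText) {
  this.loadText = loadText;   // function () -> calendar file contents
  this.saveText = saveText;   // function (string) -> store new contents
}

FlatFileBackend.prototype._loadAll = function () {
  var events = {};
  this.loadText().split("\n").filter(Boolean).forEach(function (line) {
    var i = line.indexOf(":");
    events[line.slice(0, i)] = line.slice(i + 1);
  });
  return events;
};

// event.load() -> calendar.load() + calendar.getEvent()
FlatFileBackend.prototype.loadEvent = function (id) {
  return this._loadAll()[id];
};

// event.save() -> calendar.load() + merge one record + calendar.save()
FlatFileBackend.prototype.saveEvent = function (id, summary) {
  var events = this._loadAll();
  events[id] = summary;
  this.saveText(Object.keys(events).map(function (k) {
    return k + ":" + events[k];
  }).join("\n"));
};
```

A per-event backend (one file or record per event) could implement the same
interface natively, so callers would not need to know which family they talk to.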
The more I think of this, the less I like it.
Your proposals reduce calendar to two use cases: the file is strictly local, or
strictly remote. This is simply not how ical works. For example, read
http://www.dsv.su.se/~petter/calendar/
iCal works like this: You have your own file. You can publish it if you want.
You can also read a published calendar, but it isn't your own. We can't just
remove all those ways of using calendar. We can't just remove the publish
function.

For more advanced usages, the ical file format doesn't work anymore. For a
start, you need locking. (how do you do that over http with long roundtrips?)

There are better ways to let someone else add stuff to your calendar. One way is
a calendar server. The other is some kind of sync (using emails or files on a
server).
I added some related links to bug 122654

A group calendar server or personal calendar should accept changes when 
the user receives an ICS invitation from a remote user asking them to reply 
with ACCEPT/TENTATIVE/DECLINE. If the email client or a calendar-IMAP plugin
understands how to do this, it can also look for messages in your inbox 
with ICS responses to your own meeting requests... and store the result.

As long as published free/busy data can tell you when somebody
is in your meeting... not just busy, you don't really need a calendar server.
The next best thing is when you can keep track of ACCEPT/TENTATIVE/DECLINE...
to layer what you know from received ICS files on top of free/busy data.
> iCal works like this: You have your own file. You can publish it if you want.

That is dangerous. By simply doing something as innocent as "publishing" you can
overwrite other people's changes, or even your own changes at a second location;
you can also "publish" an empty calendar on top of one with thousands of events
without a single warning.... Cool, don't you think?

You should only provide tools that are safe. Tools that are safe "if properly
used" are simply unsafe... Do not delude yourself...

"Delete calendar and file" is another favourite of mine... It is like an editor
that says: "Close file and delete it"... A good candidate for the award to the
worst computer invention of the year....

There are only 2 working configurations for a calendar app: online and offline.
All other cases collapse into those two, I repeat all other cases, including the
ones you crave for... 

Online=PCA, Offline=Sync. 

Publish does not fit anywhere, it is only a shortcut thrown in there in the
misconceived hope of providing half-syncing by allowing for manual
download/uploads... But there is no such thing as half-syncing...
half-sync=half-data you should make a poster and put it on top of your bed... 

If you want to be able to "publish" without being online (PCA), start working on
proper sync, beginning with the checks I mentioned, which are essential ones
both for online and offline calendars... 

In the meantime, if you know what you are doing, I am sure you will find
alternative ways to publish and download your beloved calendar files using
external apps; for people who do not know what they are doing, it is better if
they do not have easy access to "publishing" facilities, since they can easily
screw things up for themselves and, worse, for other people.

> You can also read a published calendar, but it isn't your own. 

What do you mean by "it isn't your own"???? 

You can still publish on top of it even if "it isn't your own", and destroy
it... This is one of the problems.... SB does not have any procedure at all to
enforce "ownership", nor is there any procedure to warn you if you are
overwriting other people's events... It is up to the "owner" to take appropriate
measures by setting file/webdav permissions; the "safety" of the data in SB is
as bad as in any other app, since it depends entirely on external measures and
on user behaviour (as in "assuming he does things right")...

It is fair to say that you did not spend a single line of code to protect user
data from accidental mistakes/misuses, not even basic warnings, at the same time
you wrote several lines to provide easy access to tools that can potentially
destroy user data...
did you read the link i gave?

ical works like this: Your data is on your computer. It always is. If you want
someone else to read it, you publish a *copy* to an online location. Since it is
a copy, whatever happens to it does not destroy your data.

Now some problems arise. What if you use multiple computers? What if you want
someone else to add event to your calendar?

For the adding events problem, this should be done by exchanging data about that
event, using email or something. Not by exchanging the whole calendar.
For multiple computers, I'm not sure yet. Maybe some locking and an
x-last-modified-by or something.

In no other internet application that I know of is collaboration done by 'just
write to this shared file, and everybody should reload the file often enough'.
It is done by servers, with locking, notifications etc.
> did you read the link i gave?

Yes and it does not change a thing!!!

> ical works like this. 

It "can work" like this... You are coming back again and again to the same
faulty assumption: "if people use it correctly, there is no problem"... But you
did not enforce correct behaviour.... you are assuming correct behaviour...

> Your data is on your computer. It always is. If you want
> someone else to read it, you publish a *copy* to an online location. Since it
> is a copy, whatever happens to it does not destroy your data.

Great!!! So users can "only" destroy the public copy.... Now I feel better!!!

And let's make one thing clear: it is easy to destroy all your data, also
locally... All that is needed is to press "refresh" after the public copy has
been destroyed, and the local copy is also doomed...

...to destroy the public copy SB kindly provides several easily accessible
weapons... ...just pick one up from the packed menu... Today the chef suggests
"publish selected events"...

...obviously, all destruction happens without a single warning, not to spoil the
fun... 

After you discover that your data was not as safe as someone claims, you might
resort to external measures, like restricting access to the public calendar
file, making it readonly for everybody except for yourself... Not a "multiuser"
calendar by any stretch of the imagination, but at least you might think your
data is finally safe... 

...Part of it... After spending a bit of time inserting new events, you might
accidentally hit "refresh" instead of "publish"... Guess what happens?  All your
changes are magically gone... Obviously without a single warning... No prob,
just type them all again... 

And we did not even scratch the issues involving real multiuser calendars...

Be honest... Is this acceptable??? 

> Now some problems arise. 

Now??????? Is this a joke?

> What if you use multiple computers? What if you want
> someone else to add event to your calendar?
> For the adding events problem, this should be done by exchanging data about
> that event, using email or something. Not by exchanging the whole calendar.

Let's make another thing clear... The modifications suggested were not proposed
simply with the aim of providing shared calendaring. That is a reductive view.

They are suggestions to prevent accidentally overwriting existing data, whatever
the intended use of the calendar (as shown above, even single-user use from two
locations will benefit). It is true that by implementing such modifications, you
can have a simple, working, poor man's shared calendar, and even simplify the
interface. But those are welcome spin-offs; the main reason is preserving
user data.

Obviously, nothing forbids implementing a proper smart backend that provides the
checks/locks itself... But now there is no backend, and flatfiles are used to
store data.... 

Which does not mean that checks are not necessary at all!!!!! It means that it
is up to the client app to make sure the data is as safe as possible!!! And
until simple flatfiles are allowed, that should be the case! 

Let's get the priorities straight here:

1) User data must be safe
2) nothing
3) nothing
....
998) nothing
999) nothing
1000) Application must be stable
1001) Additional features
1002) Application must be fast
1003) Nice look
......

I think you inverted the list.... 

Actually you did your best to make sure it is easy to lose data... But hope is
not lost...

PCA is a good starting point, not perfect, but it can be improved (in terms of
checks performed and performance-enhancing tricks)... Sync can be added with a
bit more work. All the other brilliant inventions should be eliminated,
immediately!

PS If you are careful you can code online/offline facilities so that they can
work either off flatfile backends or off proper calendar servers.... maybe using
plugins as you suggested.
Yes, user data should be safe. No, that should not be done by moving to a whole
different way of using .ics files.
It should be safe, but it is not safe.... And if to make it safe you have to
change how things are done, then so be it.

PCA works reasonably well... And it is there... It is therefore not a matter of
moving things, as much as deleting things.... 
*** Bug 257144 has been marked as a duplicate of this bug. ***
Some thoughts on the DataStore framework, which would address several of the
points touched on in the comments above: data integrity, plug-ins,
online/offline capabilities... This implies a major rewrite of calendarManager.js...

Observation1: the queries are performed on the local client -> the DataStore
backend does not need to be "aware" of the data structures it stores

Observation2: it follows that you can abstract several of the operations, so
that specific implementation is performed at plugin level

Observation3: it follows also that you can use the same framework for various
items, not only collections of events, but also collections of addresses, so the
same framework could be used with Thunderbird as well. -> It would be wise to
split general storage issues from calendar-specific ones...

Objects

DataManager


Sorry, I fired the enter key accidentally... Anyway let's continue... 

For each ItemCollection (i.e. a calendar) you have 3 types of storage, 1
volatile (in-memory objects), 2 non-volatile (offline DataStore / online
DataStore). At any time you only work with either the online or the offline
DataStore, as well as with the in-memory objects (Items+settings).

Objects:

DataManager //~ calendarManager, high level interface
|refresh()  //refresh all active ItemCollection objects
|ItemCollections //~ array of calendars
  |ItemCollection  //~ calendar
    |name
    |active
    |userSettings //misc user specific settings (e.g. color)
    |offline //boolean specifying whether in offline mode 
    |onlineDataStore //DataStore object
    |offlineDataStore //DataStore object
    |isOffline() //will also test DataStore.connected
    |            //and goOffline if DataStore.connected==false
    |goOffline() //flush;onlineDataStore.disconnect;offline=true
    |goOnline() //if(onlineDataStore.connected){sync;offline=false}
    |DataStore() //(isOnline?onlineDataStore:offlineDataStore)
    |flush() //Items/settings -> offlineDataStore (i.e. save all)
    |sync() //sync Items/settings <-> offlineDataStore + flush
    |refresh() //if active{ with DataStore
    |          //if mTime<pluggedServer.getMTime{
    |          //DataStore -> Items/settings;}}
    |addItem() //modify DataStore and Items
    |editItem() //as above
    |deleteItem() //as above
    |setSettings() //as above
    |getItem() //return in-memory Item with specified id
    |getAllItems() //return all in-memory Items
    |getSettings() //return in-memory settings
    |settings //internal, (e.g. timezone)
    |Items //internal, in-memory array of Item objects
        |add() //internal, called by methods above
        |delete() //internal, called by methods above

Item //~ calendar event
|id
|mTime //online-Server-side mTime for the item
|modifiedOffline //boolean, since offline-server mTime is useless
|RestOfObject // other properties and methods of the Item

DataStore //~ Server
|plugin //as string
|connectionString
|connected() //return boolean, try to connect if not
|            //connected and set PluggedServer
|mTime //server-side mtime as of the last 
|      //add,edit,delete,getAllItems,setSettings
|PluggedServer //methods implemented by pluginis
    |ItemCollection
    |connect()
    |isConnected()
    |getMTime()
    |addItem()
    |editItem()
    |deleteItem()
    |getItem()
    |getAllItems([afterMTime])
    |disconnect()

What to store in DataManager.rdf (~CalendarManager.rdf):
ItemCollection.name
ItemCollection.active
ItemCollection.offline
ItemCollection.userSettings
ItemCollection.onlineDataStore.plugin 
ItemCollection.onlineDataStore.connectionString 
//offline settings are automatically assigned and do not show in the GUI
ItemCollection.offlineDataStore.plugin 
ItemCollection.offlineDataStore.connectionString 

What to store in DataStore Back-End:
mTime   //time of last change (server-side)
Items
  |Item
    |id
    |mTime
    |itemString //serialized Item object
    |modifiedOffline //boolean, only store on offline DataStore

General settings (serialized as string) can be stored as the first item with
id=0. PluggedServer.XXXItem(0) refers to the settings, not to an actual item.

Internally, at any given time you only work with either the offline or the
online storage, never with both simultaneously. Whenever the app is closed flush
is called. When a connection drops or cannot be established goOffline is called.
If the connection drops while modifying data, after calling goOffline, the
add/edit/delete operation is repeated (so the offlineDataStore is used). Both
the online and offline datastore use plugins to manipulate data, i.e. they
expose the same interface. 

Externally, the rest of the application only calls the ItemCollection high level
methods (editItem/addItem...), which in turn will make changes to both the
in-memory objects (Items/settings) and to the appropriate DataStore. In other
words the rest of the application does not know what DataStore is actually being
used... It is like it was always in "local" mode....

Note that there is no publish method: either changes are reflected immediately
on the onlineDataStore (~ Publish Changes Automatically), or they are saved on
the offlineDataStore and are then merged with the onlineDataStore when the sync
method is invoked. The checks and locks are implemented at plugin level since
they depend on the actual backend used. Note that the same framework can be used
with different backends (DB/flat-files/FTP/WebDav...) and with different Items
(iCal, vCard). Also note that real multiuser calendar is possible provided
plugins/backends provide appropriate locks/checks.
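
The PluggedServer interface outlined above can be exercised with a throwaway
in-memory implementation. A minimal sketch, purely illustrative: the method
names come from the proposal above, while the Map-based storage and the item
shape (id, mTime) are my own assumptions, not Sunbird code:

```javascript
// In-memory sketch of the PluggedServer plugin interface.
function createMemoryPlugin() {
  var items = new Map();
  var mTime = 0;          // server-side mtime stand-in, bumped on each write
  var connected = false;
  return {
    connect: function () { connected = true; return connected; },
    isConnected: function () { return connected; },
    getMTime: function () { return mTime; },
    addItem: function (item) { items.set(item.id, item); mTime++; },
    editItem: function (item) { items.set(item.id, item); mTime++; },
    deleteItem: function (id) { items.delete(id); mTime++; },
    getItem: function (id) { return items.get(id); },
    getAllItems: function (afterMTime) {
      var all = Array.from(items.values());
      if (afterMTime === undefined) return all;
      return all.filter(function (i) { return i.mTime > afterMTime; });
    },
    disconnect: function () { connected = false; }
  };
}
```

A flatfile or DB plugin would keep the same surface and only change what
happens inside each method.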
> Observation1: the queries are performed on the local client -> the DataStore
> backend does not need to be "aware" of the data structures it stores

Not true. Calendar servers can perform the queries, like searching for events in
a time-range.

I don't see how your suggestion provides locking, and/or fail-safe use of one
file by multiple clients. Just saying 'there is locking' doesn't do it. You need
to provide a way to lock and/or share a file on a remote server with a high lag.
>Not true. Calendar servers can perform the queries, like searching for events 
> in a time-range.

Absolutely, even though in most cases for calendar/contacts you can work on the
client side and query the in-memory objects directly (as it is now). This is
because the dataset is smallish for typical use (except for large networks), and
most queries only involve the ID. Other queries (on the content of the object)
are less frequent and do not need to be as fast, so I am not sure a dedicated
sql db (Vlad's hack excluded) specific for calendar items is justified
considering the coding required both on the client and on the server. You can
browse the in-memory collection of objects for matches... This can greatly
simplify the api for plugins, and you will have eventObject-level access (if the
backend supports it).
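
The "browse the in-memory collection for matches" idea can be sketched as a
plain client-side filter. A hedged example: the event shape (start/end as plain
timestamps) is an assumption, not the actual Sunbird object model, and real
code would first have to expand recurrences via libical:

```javascript
// Client-side range query over the in-memory item collection.
// Keeps any event whose interval overlaps [rangeStart, rangeEnd).
function eventsInRange(items, rangeStart, rangeEnd) {
  return items.filter(function (ev) {
    return ev.start < rangeEnd && ev.end > rangeStart;
  });
}
```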

Note that there are additional issues since libcal is strictly bound to the
backend... ...While plugins will need to be in between the parser/in-memory
objects and the backend. As mentioned separating storage-specific issues from
application-specific requirements would be beneficial. 

That said, I have just found out about Vlad hack,
http://rig.vlad1.com:2500/vladhack/show/MozillaUnifiedStore ... I do not know if
Vlad is also considering multiuser and offline/syncing issues. I hope he does.
If that's the case we might have a useful backend for the calendar which might
make the points touched in this discussion obsolete. 

> I don't see how your suggestion provides locking

Locking should be implemented by the plugin (e.g. XXXplugin.editItem). Locking
requirements for a DB backend are obviously quite different from those for one
based on a flatfile. The high-level interface only provides methods to
add/edit/delete + syncing; you do not have direct access to lower-level
functions (like upload/download).

A flatfile-plugin.editItem() might look something like this (pseudo-JS; helper
functions assumed):

getFileAsString() -> content of the ics file as string
getFileMTime() -> mTime of the ics file, server-side
replaceItem(icsFileAsString, EditedItem) -> string with full ics file
saveFile(string) -> mTime of saved file
startLock() -> boolean, true if success
extractItemMTime(ID) -> mTime of item, OPTIONAL

function editItem(EditedItem)
{
    var s, T, Locked;
    var FMD = getFileMTime();
    do {
        do {
            s = getFileAsString();
            EditedItem.mTime = Calendar.mTime = FMD;
            //START OPTIONAL BLOCK 1
            if (extractItemMTime(EditedItem.ID) > EditedItem.mTime) {
                alert("Item X has been changed externally");
                ....
            }
            //END OPTIONAL BLOCK 1
            s = replaceItem(s, EditedItem);
        } while ((FMD = getFileMTime()) > Calendar.mTime);
        Locked = false;
        while (doesFileLockExist()) {
            Locked = true;
            wait(0.1);
            T = getLockStartTime();
            if (getServerTime() > T + TimeOutParameter) {
                killLock();
                break;
            }
        }
    } while (Locked);
    if (startLock()) {
        mTime = saveFile(s);
        killLock();
    } else {
        alert("Could not lock file");
        ....
    }
    //START OPTIONAL BLOCK 2 FOR THE PARANOID
    if (extractItemMTime(EditedItem.ID) != EditedItem.mTime) {
        alert("Item X has been overwritten");
        ....
    }
    //END OPTIONAL BLOCK 2
}

A DB-plugin.editItem() will be much simpler since the DB itself will take care
of locking...
(In reply to comment #41)
> most queries only involve the ID. Other queries (on the content of the object)
> are less frequent and do not need to be as fast

Not true again. Queries for id are not frequent; queries for content are very,
very frequent. Remember that start time etc. are part of the content. The main
display is a list of events for a specific time (rendered in some funky way,
but basically a list of events). This is the most used query.

> Locking should be implemented by the plugin (e.g. XXXplugin.editItem).

Yeah, but again _how_ do you lock a file on a remote server with a high lag?
Putting a lockfile in the dir doesn't work because of the lag.
I'm not asking for code, just for an idea.
> Not true again. queries for id are not frequent... 

If you refer to "get events for day X", I believe that libcal must be used, a
simple db query is not enough, since you need to understand how repetition
works, and I do not think you can do that in sql... ...and since you need libcal
anyway to process the information, you might just use it also for querying the
objects in memory... When I say query for content I mean pure db queries, like
"select * from events where subject like '*blablabla*'"... 

> Yeah, but again _how_ do you lock a file on a remove server with a high lag?

With high lag, I would guess the best approach is to use webdav, which, from the
little bit I read about it, is supposed to be "safe" under such circumstances.
But you might use a local network, so that lag is not such a big issue and
using file flags should be good enough. Having plugins, you could choose the
backend most suitable for you. My uninformed guess is that a FS that does not
allow files to be directly overwritten (at the time of the local write
operation), so that an explicit rm command is required, should be ok.
See bug 263620 for more about sql and calendar
Good one! As mentioned, I really like the idea of mozStorage; this will
obviously make plugins a no-go... Provided mozStorage will be able to support
multiuser access and offline/syncing capabilities... Once there is syncing it is
conceivable to create another type of plugin, not directly linked to the
application storage (as the ones illustrated above), but between mozStorage and
third-party backends. I really think that multiuser/syncing should be built
into mozStorage.
I just added this line to calendar.js > refreshRemoteCalendarAndRunFunction
(first line of first else{} block) and it seems to work... 

gCalendarWindow.calendarManager.reloadCalendar( calendarServer );

External changes (either by scripts or by other users) to local ics files are
not overwritten... It also works as a poor man's shared calendar on a LAN... It
would be nice to modify reloadCalendar() so that files are not reloaded if mTime
< storedMTime; this would remove unnecessary operations. I am not sure how to
work on the fs in js... It would also be nice to have a function that calls
reloadCalendar for each local calendar every X minutes (X specified in prefs.js)...
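
The mtime guard ("do not reload if mTime < storedMTime") could be a small
wrapper. A sketch only: reloadIfChanged, calendar.storedMTime, and the helpers
passed in are hypothetical names standing in for the Sunbird internals being
discussed, not existing code:

```javascript
// Skip the reload when the file on disk has not changed since we
// last loaded it; otherwise reload and remember the new mtime.
function reloadIfChanged(calendar, getFileMTime, reloadCalendar) {
  var fileMTime = getFileMTime(calendar.path);
  if (fileMTime > calendar.storedMTime) {
    reloadCalendar(calendar);
    calendar.storedMTime = fileMTime;
    return true;   // reload happened
  }
  return false;    // file unchanged, skip the work
}
```

A periodic timer could then call this for each local calendar without penalizing
single users whose files never change externally.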
That is not the right thing to do. If i have a file open in a program, i don't
expect that changes to that file from other sources will destroy the data in the
app. At least not without asking me.
Michiel 

From what I can tell there is no exclusive lock from SB on the "local" ics
file... You open the file, read the content and then write back to it whenever
there is a change.... In the meantime other programs (or users) might well
change the file... All those changes will be lost... I am implementing a reload
function that will only execute if the file has been changed externally; this
way single users will not be affected performance-wise... And while I am there I
will probably also add simple file-flag-based locks, and autoreload... So you
will have a poor man's shared calendar for LANs if you want...
Yes, and the reloading is exactly the problem. If i open a text file in my
text-editor, or an image in my image-editor, and i modify the file using
something else, i don't expect the data in the app to be lost. It can ask if it
should reload, but it should not just do it regardless.
So if you add reloading, you should make it optional, and default to off.
Absolutely, it will depend on a userpref, for the users that want to have a
poor man's shared calendar on a lan without having to wait for unifiedstorage;
for the others it will be life as usual.
(In reply to comment #47)
> That is not the right thing to do. If i have a file open in a program, i don't
> expect that changes to that file from other sources will destroy the data in the
> app. At least not without asking me.

I have mixed feelings about this... External apps might still improperly tamper
with the ics file while sunbird is not running and obviously disrupt things...
You are only "protected" while sunbird is running, by flushing the file
("locally" or remotely) disregarding external changes... But if you have
external apps improperly configured to mess with SB files, then the end
result will be the same... Sooner or later the ics file will be corrupted...

On the other side, with an optional shared property (or the equivalent "Publish
Changes Automatically"), if a single client unchecks the box, they can ruin
life for all the other people sharing the same file...

I think therefore that it makes more sense to always assume shared mode (both
for local and remote files), without using an option at all... 

Furthermore "shared" and "publish changes automatically" are mutually
exclusive... Either you share a file on a lan or you share a file on a remote
location, you cannot have two reference points simultaneously... This should be
reflected in the user interface... 

I really think that both the code and the gui, as far as the dichotomy
remote/local calendars goes, should be reorganized, because it is confusing for
users and for developers... And potentially unsafe...

What about replacing the publish mechanism with a sync mechanism similar to the
one I suggested in http://wiki.mozilla.org/Mozilla2.0?UnifiedStorage
Maybe instead of having the difference between local and remote calendars
defined by the protocol used to access them (file: vs http:), we could define it
as stored inside the profile, or external (or call it private and subscribed).
(i'm not sure if a private file really should be constrained to be inside the
profile, but you get the idea)
Then for inside files, assume nothing else touches those files, and just don't
reload it. No need.
For external files, you should do reload and everything. (optionally, make them
read-only.)
If done that way in the backend and in the interface, and we manage to come up
with good locking and reloading for http files, i guess the risks of losing
data are a lot smaller.
Biggest problem: the locking stuff over slow connections.
As an alternative, shared files on LAN could be identified by using "file://",
and also in this case we keep both a "local" file and obviously the remote
file... This way, if later on there is a sync function, you already have a
"local" file  to sync from... This is useful if the calendar is shared over a
Lan and you have a laptop...

In any case the "local" path (the one in the profile) should probably be hidden
away, the user cannot change it directly (if they really want to do so they can
always edit the rdf file manually)... So only one box in the dialog: remote
calendar. If it is blank it is only a "local calendar". And for all remote
calendars including ones on the LAN we assume "shared" mode. So no need for
"shared" or "publishchangesautomatically".... So we avoid those two
parameters... But we add autoreload and readonly.

Other simplification: create 

calendarManager.calendarArray=new Array();
calendarManager.loadRDF() -> calendarManager.calendarArray 
calendarManager.saveRDF() <- calendarManager.calendarArray 

And only work with the calendarArray, do not use get/setAttribute directly. This
also should speed things up since you only access the file after a property is
edited... With the additional advantage that I can add calculated flags only
once in the loadRDF (like isRemote)....
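
The calendarArray idea could look roughly like this. loadRDF/saveRDF are
stubbed parameters standing in for the real RDF read/write; the
write-through-on-edit behaviour and the isRemote flag computed once at load
time are the point of the sketch, not the actual calendarManager API:

```javascript
// Keep calendars in memory; touch the RDF store only when a
// property actually changes.
function makeCalendarManager(loadRDF, saveRDF) {
  var manager = {
    calendarArray: loadRDF(),
    setProperty: function (index, name, value) {
      this.calendarArray[index][name] = value;
      saveRDF(this.calendarArray);  // write-through on edit only
    }
  };
  // calculated flag added once at load time
  manager.calendarArray.forEach(function (cal) {
    cal.isRemote = /^https?:/.test(cal.url);
  });
  return manager;
}
```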

Other simplification: New calendar/subscribe can be unified in a single dialog... 

Furthermore there is a lot of duplication in both the xul and js regarding
Remote and Local functionality... It is easier to use the same dialogs and
methods/functions and split the code as appropriate within the
methods/functions... For instance, there is no point in having
editLocalCalendarDialogResponse and editServerDialogResponse, separate xul
(localCalDialog.xul and serverDialog.xul), separate launchers
(launchEditCalendarDialog and launchEditRemoteCalendarDialog)...

> For external files, you should do reload and everything. (optionally, make
> them read-only.)
> If done that way in the backend and in the interface, and we manage to come up
> with good locking and reloading for http files, i guess the risks of losing
> data are a lot smaller.

Good for me!!! To work then!

> Biggest problem: the locking stuff over slow connections.

The lock itself should be relatively fast. It is an empty file. You need a
method to read a file's modified date on the remote filesystem. And if we
compare the edited event to the same event in the newly downloaded file we can
at least detect if there were problems...
I forgot: until the sync function is available, if a calendar is remote, once
you start up you must download... And if the remote calendar is not available it
should be marked as inactive (when you try to mark it as active manually you
check if the remote calendar is available before making it active). Using the
local file when there is no connection is not a good idea, since then
reconciling your changes with those of other people is a problem (without
syncing).

-> No point in having "download remote calendar on startup"... Later on you will
have online/offline switch...
actually mark it as read-only not inactive...
Confused -- please correct me if I am wrong.

I am using Thunderbird 0.9 with daily build of Calendar Extention. 

Here is an email message that I got to my mailbox:

MIME-Version: 1.0
Content-Type: text/calendar; method=REQUEST;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook, Build 10.0.2627
Importance: Normal
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.2180
X-UIDL: (n<!!I$g!!Qop!!P,>!!

BEGIN:VCALENDAR
PRODID:-//Microsoft Corporation//Outlook 10.0 MIMEDIR//EN
VERSION:2.0
METHOD:REQUEST
BEGIN:VEVENT
continued

The email arrived as inline text. If my understanding is correct, I should
have viewed it as an attachment according to "Make sunbird recognize the .ics
extension as a text/calendar MIME type" and sunbird should have reacted to it
accordingly. Am I missing something?
(Regarding Comment #57)

I'm hope that libmime will be enhanced to identify ics attachments
and treat them like a vcard... which could look like the attached.
QA Contact: gurganbl → general
(In reply to comment #56)
> actually mark it as read-only not inactive...

Has there been any action taken on LAN file locking since this?

If not, I cannot see that Sunbird is a usable replacement for Outlook, or other
shared calendars.  Sadly, a paper calendar on the wall would be more useful for
a small workgroup needing to co-ordinate their activities.

In fact, if the situation is still as described above, I would have to ban the
program completely from the networks I manage  --  because of the risks to user
data.

Technically, I agree with most of "ago's" positions.  Small workgroups are not
capable of running their own web server, let alone a webDAV server.  The only
practical solution for small groups would be LAN-shared calendar files, with
file level locking.
(In reply to comment #59)
>
> Technically, I agree with most of "ago's" positions.  Small workgroups are not
> capable of running their own web server, let alone a webDAV server.  The only
> practical solution for small groups would be LAN-shared calendar files, with
> file level locking.

They're not?  Given that both Apache and IIS come with WebDAV support, what's
the difficulty?
(In reply to comment #59)

I agree completely. We have reached the same conclusion: given the number of lost appointments due to overwriting, we are using paper until file locking or a reload prior to publishing gets fixed. It's a pity that one issue prevents its use.
Reassigning all automatically assigned bugs from Mostafa to nobody@m.o

Bugspam filter: TorontoMostafaMove
Assignee: mostafah → nobody
bug 352333 and bug 287612 talk about the read before put. Also, to a lesser extent, bug 329570.
This is an extremely wide-ranging and important discussion, but in order for any of it to be useful, we need to file specific bugs for each item.

For all "emailing ICS using Lightning" you should search for iTIP and iMIP in the subject line and see what current bugs we face in that area.  Bug 334685 was the giant feature bug that landed iTIP support (which is in current nightlies).

For ICS file (i.e. our iCalendar/WebDav provider) issues, we'll be managing that through a series of bugs.  
* Bug 287612 -- using etags for Webdav
* Bug 373430 -- this issue w.r.t. FTP based calendars
* Bug 329570 -- high level, catch all, but can be used for flat ICS file/LAN issues 

For CalDav based calendars we follow this issue with: bug 362698

The bulk of this issue was addressed during the refactoring of the backend support - Sunbird no longer uses flat ICS files to store all its data.  Instead we have provider architectures (as you see above) and those are used to access all calendar data.

Since bug 329570 refers to the current implementation and since it is far easier to follow, I am duping this bug against that one.

If there are other specific issues in this bug (and in the current product) that you do not feel are addressed in 329570 or other existing bugs, please file a new bug to track those specific issues.
Status: NEW → RESOLVED
Closed: 17 years ago
Resolution: --- → DUPLICATE