2022-08-04 - Tags: Tesla CompanyCar E-Mobility
originally posted on: https://boesebeck.name
The leasing on my Model 3 is running out, and the current leasing company would almost double the rate if I took the same vehicle again. Because of that, and because of my experiences with Tesla and electric driving, it's a good moment to look at alternatives.
Yes, I was and am actually quite satisfied with the Model 3. Unfortunately, I also had some not so nice experiences with Tesla. As far as service is concerned, it's really a bit difficult. A lot was rebuilt, changed and restructured at Tesla, and some things improved. But all of these changes naturally took some time to take effect, leading to teething troubles - unfortunately almost all of them to the detriment of Tesla drivers.

All in all it's OK with Tesla, because normally you hardly need any service at all. According to Tesla, a service after four years would be fine, but it is not necessary - in contrast to the other manufacturers, who force you into their workshop at least once a year.

Still, the whole thing leaves you with a "weird" feeling. But I wanted to include this point in the calculation, document my decision-making a little and compile my assessment of the various criteria.
Since leasing has become significantly more expensive and there are still a few problems with the Tesla service, I also wanted to take a look at other vehicles - fortunately there is already a small selection in the electric vehicle segment. And yes, I wanted to stay electric. On the one hand because I think I get more car for the money, and because it's also nice not to leave exhaust fumes wherever you drive. In my opinion, the fact that you only visit gas stations to have your car washed is also an advantage.

The candidates that could be considered as successors should of course be "similar" to the Model 3. I paid particular attention to driving pleasure, suitability for travel and the driver assistance systems / software, which is why candidates such as a Renault Zoe or a Mini E were ruled out from the start. The price is of course also an important factor in the decision. In order to be comparable with the Model 3, the "large" equipment variant had to be used for comparison in almost all vehicles. Very often there are cheaper options, but the car should be similar in terms of range, performance, etc.

Delivery time was also a problem: the leasing expires in June 2023, and by then a new vehicle should be there. Surprisingly, this is a serious factor in August 2022 - some vehicles are eliminated due to delivery problems (Volvo, for example).
I've given a few very subjective ratings here. This is my personal opinion on the subject and everyone can come to a different conclusion. In particular, the weighting of the individual categories will differ from person to person.
As already mentioned, the evaluation is very subjective and, above all, comparative. This means that if a vehicle has more points in a certain category than another, it only means that it was subjectively better in this comparison. This is not an absolute statement! Please keep that in mind when looking at the reviews.
I award 0-100 points in each category, although this is really only a subjective assessment.
Each category has different weights for me. Since I value driving pleasure more than e.g. range, these points are counted twice.
So when I put it all together, I come up with the following weightings:
Category | Weight | Remark |
---|---|---|
Driving fun | 2 | Acceleration, sportiness |
Service | 1 | Workshop, maintenance etc. |
Range / Charging | 2 | Charging power and duration on a road trip |
Driving assistance | 3 | Lane keeping, cruise control etc. |
Software | 2 | Over-the-air updates, navigation, infotainment |
Space | 2 | How much space in the car itself, trunk, frunk |
Looks | 2 | The appearance, personal taste |
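To make the weighting concrete, here is a minimal sketch in Python of how such a weighted total can be computed (the exact rounding is my assumption; the weights are the ones from the table above). Fed with the Polestar 2 scores from further below, it lands on the 73 points shown in that table.

```python
# Weights from the table above.
WEIGHTS = {
    "driving fun": 2, "service": 1, "range/charging": 2,
    "driving assistance": 3, "software": 2, "space": 2, "looks": 2,
}

def total_score(scores: dict) -> int:
    """Weighted average of the category scores, rounded to whole points."""
    weighted = sum(scores[cat] * w for cat, w in WEIGHTS.items())
    return round(weighted / sum(WEIGHTS.values()))

# Example: the Polestar 2 ratings from the review below.
polestar2 = {"driving fun": 85, "service": 50, "range/charging": 50,
             "driving assistance": 60, "software": 85, "space": 85, "looks": 88}
print(total_score(polestar2))  # -> 73
```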
For the vehicles discussed here, I also calculate a (theoretical) value for the time that would probably be needed for charging on a 900km trip.
For the charging stops during the trip I assume that you never charge back to 100%, but you do start the trip with 100%. After that, each stop charges from 10% to 80%, and the trip continues accordingly.
Example: with a theoretical range of 500km, you would probably get pretty much exactly 450km before the battery only has 10% left.
Then you would have to charge up to 80%. Let's assume that this takes 30 minutes. Then you would continue driving; since you have only charged to 80%, you only have a range of approx. 350km (you don't drive down to 0, but to 10%).

Then there are still 100km missing, which you could cover by charging again - in this case you would have to charge for about another 10 minutes for the remaining 100km. But then you would probably arrive with 0% SOC, which is normally not wanted. In other words, to arrive at the end with at least 10%, you would have to charge for about 42 minutes in total on a 900km trip.
Of course, this assumes that there are no fluctuations in consumption (elevation changes) or similar factors; it is a purely mathematical value that can probably not be achieved in reality. But it clarifies a little how the charging speed and the range (and thus also the consumption) influence such a trip in combination.
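To illustrate, here is a minimal sketch of this charging model in Python. It follows the description above (start at 100%, drive down to 10%, charge to 80% at each stop, and only charge as much as needed at the last stop). The figures in this article additionally use measured data from ElektroautoVergleich, so this simplified sketch only lands in the ballpark of the numbers quoted here.

```python
def road_trip_charge_minutes(real_range_km: float, minutes_10_to_80: float,
                             trip_km: float = 900.0) -> float:
    """Purely mathematical model of the total charging time on a road trip.

    Start at 100%, drive down to 10%, then charge 10->80% at each stop;
    the last stop only charges enough to finish the trip. Assumes constant
    consumption and a linear charging curve - both simplifications.
    """
    first_leg = 0.9 * real_range_km   # 100% -> 10%
    full_leg = 0.7 * real_range_km    # 80% -> 10%
    remaining_km = trip_km - first_leg
    minutes = 0.0
    while remaining_km > 0:
        if remaining_km > full_leg:   # regular 10->80% stop
            minutes += minutes_10_to_80
            remaining_km -= full_leg
        else:                         # final stop: charge just enough
            minutes += minutes_10_to_80 * remaining_km / full_leg
            remaining_km = 0.0
    return minutes

# The example from the text: 500km real range, 30 minutes for 10-80%.
print(round(road_trip_charge_minutes(500, 30)))  # ~39 min, close to the ~42 above
```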
Because the WLTP range by itself says little, and the total charging time on the road trip described above may be somewhat misleading, I generate the score for "Range / Charging" from the normalized product of the charging time on the road trip and the maximum charging power (because the latter is, for me, an indicator of good charging electronics).

The range used is the real range listed on ElektroautoVergleich, together with the average charging time for 10-80% and the average charging power for 10-80% listed there.

Put together, these values then result in the rating for Range / Charging.
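The exact normalization isn't spelled out above, so here is one plausible reading as a sketch (my assumption, not necessarily the formula actually used): less charging time on the trip is better, more peak charging power is better, and both are scaled against the best value in the comparison before being multiplied into a 0-100 score.

```python
def range_charging_score(trip_minutes: float, max_power_kw: float,
                         best_minutes: float, best_power_kw: float) -> int:
    """One possible normalization (an assumption, not the author's exact math):
    scale the road-trip charging time and the maximum charging power against
    the best car in the field, then take the product as a 0-100 score."""
    time_factor = best_minutes / trip_minutes    # 1.0 for the fastest trip
    power_factor = max_power_kw / best_power_kw  # 1.0 for the highest power
    return round(100 * time_factor * power_factor)
```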
## Polestar 2

I was really looking forward to the Polestar 2. The car looks great and I think it's really nice to look at. The space in the trunk is good, at least larger than in the Model 3.
But of course there is a "but": I found it super cramped in the vehicle. The very wide center console in particular (according to Polestar, this is due to the combustion-engine platform the car is built on) was really annoying. For me as the driver it was too narrow and uncomfortable. The rear seats are quite roomy and OK so far.
But the Polestar 2 is really super sporty to drive, it performs well and is also good around corners.
The software is a bit "strange" - I found Android Automotive OS (the operating system used there) surprisingly unintuitive and I got "lost" in the menus. At one point we had two navigation systems running: A Better Routeplanner was installed on our demonstration vehicle and ran in the background, together with the built-in Google navigation. That would be OK so far, but the two had different destinations - and for a while we didn't manage to switch either of them off.
The driving assistance is OK so far, but the lane keeping assistant has a problem finding the "middle". Sometimes it drove too far to the right, sometimes too far to the left... at one point it even wanted to take an exit off the Autobahn. I had no real faith in the thing.

The "smart" cruise control was OK so far, although it's very weird that the Polestar recognizes the speed limit but doesn't automatically set the cruise control to it!

The Polestar cannot really score points when it comes to charging technology either, because firstly its consumption is too high for a good range (486km WLTP and approx. 395km in real terms), and secondly it charges too slowly on average. On the ElektroautoVergleich page, an average charging power of just over 100 kW was measured - that's a bit low. Charging from 10 to 80% is done in just over half an hour, at 32 minutes. Nevertheless, the Polestar ends up (purely arithmetically) with a pure charging time of at least 74 minutes for a 900km trip.

The price is worth mentioning: in the Model 3-comparable equipment we are at around €71,000. However, leasing is a problem. Polestar couldn't offer a corporate lease without a down payment; I phoned umpteen people and pestered the salesman in the branch. Everyone said it couldn't be done. At some point, a supporter wrote me an email saying I had to "submit an application". That was the amazing thing: contrary to what the sales staff in the Polestar branch said, it does work - but only as the last step of the leasing application for the vehicle, which runs 100% online, similar to Tesla! The service was once again the real problem.
Unfortunately, I have to give the Polestar lower marks:
Category | Points |
---|---|
Driving fun | 85/100 pts |
Service | 50/100 pts |
Range / Charging | 50/100 pts |
Driving assistance | 60/100 pts |
Software | 85/100 pts |
Space | 85/100 pts |
Looks | 88/100 pts |
Total | 73/100 pts |
## Volvo C40

You would have to compare the Volvo C40 to the Polestar, since both are built on the same basis. There are actually only major differences in the built-in infotainment system, which - from what I've seen - is a little more confusing than that of the Polestar. Unfortunately I couldn't drive the car, and due to the delivery problems it wasn't a real alternative for me anyway. The Volvo has a slightly shorter range than the Polestar (448km WLTP / 350km real), but probably the same charging technology. In purely arithmetical terms, the car needs about 92 minutes of charging on a 900km road trip - that's surprisingly bad; only the Mach-E is slower.
Category | Points |
---|---|
Driving fun | 89/100 pts |
Service | 90/100 pts |
Range / Charging | 42/100 pts |
Driving assistance | 80/100 pts |
Software | 80/100 pts |
Space | 80/100 pts |
Looks | 88/100 pts |
Total | 78/100 pts |
## Hyundai Ioniq 5

The Ioniq was also one of my favorites: it looks really futuristic, offers a lot of space, and with Hyundai you would think that they know how to do electric cars. They do - the Ioniq is in a great position when it comes to charging performance.

Unfortunately, I found a few problems in my tests - the driver assistance systems in particular were more dangerous than helpful in my case. Apart from the fact that the lane keeping assistant had trouble staying in lane, it simply switched itself off without an audible or any visible warning! The cruise control remains activated, though, i.e. you do not notice at all that the vehicle is now simply driving straight ahead. In the display in front of the steering wheel (an advantage) there is an icon that shows whether the driver assistance is on or off. When it's off, the icon is gray; when driver assistance is active, the icon turns light green. With changing light conditions, it's almost impossible to tell the difference! I'm not the only one who noticed this; there are also some videos on YouTube about it. Unfortunately... If such an assistance system is installed, then it must be safe! The excuse that it's just a "normal" car doesn't apply.
What is also a real problem: the navigation has no integrated charging planning. If you want to use the vehicle for longer distances too, this is really rather impractical. That should be standard in 2022.
On the other hand, the built-in entertainment system supports Apple CarPlay and Android Auto, which makes up for a few small problems (strangely enough, the wireless version is only available in the "smaller" model variants).

Charging performance is really where the Ioniq can shine: it charges at a super-fast average of 190 kW from 10-80%, the fastest in this class! But unfortunately the range is a bit lower; it is given as 481km WLTP, which in real terms should be around 390km. On a 900km road trip, you would only need to spend around 40 minutes charging the car. Still undefeated!

And in this class, the Ioniq is the cheapest vehicle at just over €66,000.
Category | Points |
---|---|
Driving fun | 85/100 pts |
Service | 80/100 pts |
Range / Charging | 94/100 pts |
Driving assistance | 50/100 pts |
Software | 92/100 pts |
Space | 95/100 pts |
Looks | 85/100 pts |
Total | 81/100 pts |
## Genesis GV60

Genesis is Hyundai's "luxury" brand, so the GV60 is the closest comparison to the Ioniq 5. The charging technology is identical, but the vehicle is a bit sportier and a bit smaller - so there is less space here. The infotainment system is structured differently, and there are lots of gimmicks here too. The Genesis also has a lot of power, especially in the sportier version, and can even drift! In terms of fun, it is probably unbeaten in this price segment, but unfortunately it is firstly not available and secondly relatively expensive at €78,000 (for the sports version). It is also not so interesting in leasing, because the residual value is set relatively low.

What I found amazing is that the maximum power is not simply available here: you have to press a silly button, and then the full power is on for 10 seconds or so. Feels like a crutch, especially in 2022.
Category | Points |
---|---|
Driving fun | 85/100 pts |
Service | 80/100 pts |
Range / Charging | 91/100 pts |
Driving assistance | 80/100 pts |
Software | 70/100 pts |
Space | 85/100 pts |
Looks | 80/100 pts |
Total | 82/100 pts |
## Ford Mustang Mach-E

One of the most expensive cars in this ranking: if you take the configuration with the large battery and all-wheel drive, the price is around €74,000.

The Mustang Mach-E is really fun; it's a really good electric car. The infotainment system supports Apple CarPlay and Android Auto, which lets you bypass the Ionity binding in the navigation system. Battery preconditioning is not supported anyway, so there really is no need for the car itself to know the route you are driving.
The software is really good compared to what else is offered. If you like, you can get lost in the settings menus and adapt every important and unimportant little thing to your own needs. It's really nice and there's always something to "play" with. Even a small app to pass the charging time was thought of - a sketch app with which you can paint small pictures (Tesla sends its regards).
The navigation system is also really good: it shows everything you need, especially the charging stops, how long you have to charge where, and with how much SOC you arrive. Unfortunately, you are tied to Ionity charging stations, which is a problem. After one year you lose the free customer status with Ionity and have to subscribe there for €12 a month in order not to pay 79 ct/kWh. This is a no-go. But as already mentioned, you can circumvent this quite well with the Apple CarPlay integration.

Unfortunately, there is also no heat pump, which reduces the range, especially in winter. Apart from that, the Mustang is doing quite well with 540km WLTP (in real terms around 430km). Unfortunately, the average charging power between 10-80% is rather slow at 86 kW; it is by far the worst in this comparison. On top of that, the charging curve in the Mustang drops off very sharply above 80% - sometimes to less than 22 kW!

Arithmetically, you would have to allow at least 93 minutes of charging time for a 900km road trip! That's really a lot, and the slowest in this comparison.
However, the driver assistance systems are really OK: the lane keeping assistant was really good and held the lane well, and the cruise control works well, does not brake too harshly and does not follow too closely. This is the only vehicle I've tested where I could build up trust in the driver assistance similar to Tesla!
A special feature of the Mach E are all the 'gimmicks' - you can set it up so that there's an engine noise in the cabin when you accelerate (it sounds a little like a V8 - but really only a little). The Mustang also lights up when you approach it in the dark with the key. And a Mustang logo is projected onto the road! Really nice gimmicks - everyone has to decide for themselves whether that justifies the additional price. Not for me.
The space available in the Mustang is relatively good, although not really outstanding in comparison. Although the Mustang is not built on a combustion engine basis, there is still very little space inside. The "floor" of the vehicle is relatively "thick", which takes away the depth of the whole thing, and you can also feel it in the trunk. In comparison one of the smaller trunks.
The frunk has a nice feature though: there is a hole through which water can drain. This is good in that you can easily wash out the frunk. Ford gives the example that you can store ice in the front so that drinks can be chilled. Nice gimmick, really.
## Tesla Model Y

If you're a Tesla driver and familiar with the system, there are no surprises with the Model Y. The Tesla is relatively cheap in comparison (€67,370) and is also one of the cheapest to lease, because the residual value can be set relatively high (anyone who has ever tried to find a used Tesla knows what I mean).

The special features of the Model Y are - in my opinion - the better looks and the significantly better space. According to the manufacturer, the trunk, with a volume of more than 900l (without folding down the rear seats), is twice as large as that of the Ford Mustang, for example (of course, the numbers are typical American embellishments - but then, so are Ford's!).
The seating position in the Model Y struck me as very comfortable. You don't sit as sportily low as in the Model 3, and yet you still have the go-kart feeling. The fact that the higher seating position suits me may also be due to my advanced age 😉. The chassis has really gotten better: compared to my Model 3 from 2019, the Model Y (even the Performance variant) feels much more comfortable. In general, it has become much quieter in the car. The wind noise was very audible in the Model 3, especially on the motorway - which is why you didn't want to drive so fast 😉. The Model Y is much quieter; there is hardly anything to hear. And compared to the other vehicles I've tried, there really isn't much of a difference.

If you examine the numbers more closely, the Model Y is the second-fastest "charger" in this list (sharing the place with the other Teslas 😉). Charging from 10 to 80% takes just under half an hour, at 27 minutes - a really good value. Together with the real range of approx. 435km, we get a total charging time of 53 minutes on a 900km road trip!
Tesla's software is the most mature and the best to use. There are no problems with the navigation, none with the settings or anything else. Yes, you could say the Tesla is a "computer on wheels" - that doesn't have to be a disadvantage, especially since the driving experience is largely shaped by the infotainment system and its presentation.
The binding of the navigation to Tesla's own Supercharger network could be seen as a disadvantage. But it really isn't, as Tesla has more charging points in Europe than all the rest combined - meaning I don't have to charge anywhere else. And that matters, especially when it comes to price: Ionity charges "non-members" 79 ct/kWh; if you don't want to pay that, you have to pay a fee of €12 per month. This is really a silly way to retain customers. In addition, you can find other charging stations in the Tesla's app, but they are not included in the route planning - unless you add them manually.

At Ford, the connection to Ionity is more of a disadvantage, because they have many charging stations, but not as many as would be necessary in some areas. Tesla's charging network is clearly a very big plus, which you can see in the rating under Range / Charging: 10 points are added for the super easy-to-use charging network.
And that is exactly Tesla's advantage over everyone else here: I can get into my car at any time, with any charge level, tell the computer to navigate me to XYZ, and I don't have to do anything else. With the other vehicles, no matter which one, I always have the problem that I need authorizations, cards, roaming etc. for the charging stations along the way. This is really unnecessarily complicated.

And yes, that advantage is fading, as Tesla wants/needs to open up its charging network to non-Tesla vehicles. But those chargers are certainly not stored in the other cars' navigation systems, and using them is just as tedious as with the other providers: download the app, set up the payment method, activate the charging station, etc.
One "downside" is that Tesla doesn't support Android Auto or AppleCarPlay. With Tesla, however, I only noticed this a little negatively, since the functions of the on-board software are sufficient. Apple Car Play is particularly helpful if your own system doesn't offer all the features (like the Hyunday) or isn't that great in itself.
If you mention the software at Tesla, the most important non-driving-related features should not be missing:
Speaking of the software. This can not only be found on the vehicle itself, but also in the Tesla app. And that's something special. Not only can I control the air conditioning, see what the battery level is, open the trunk and frunk, unlock or start the vehicle. I can see where the vehicle is, I can even "remote control" the vehicle (in Germany you have to be nearby for this to work). And all of this works super easily and quickly. You can even use the app to schedule service appointments. The app is a great feature!
I'm currently awarding 50 points for service, because it really wasn't that great. The whole thing is getting better, and scheduling via the app is actually not that much of a problem; the problem is the processes and the people behind them. I hope Tesla gets this under control. But since you (hopefully) hardly ever have to go to Tesla service, it doesn't matter that much.
Category | Points |
---|---|
Driving fun | 90/100 pts |
Service | 50/100 pts |
Range / Charging | 77/100 pts (10 point bonus for the charging network) |
Driving assistance | 90/100 pts |
Software | 90/100 pts |
Space | 100/100 pts |
Looks | 85/100 pts |
Total | 86/100 pts |
## Tesla Model Y Performance

Here you have to differentiate a little between the Performance and the normal variant. The Performance variant clearly looks better and offers clearly more driving pleasure, which is why it gets several more points here. When charging, the Performance model only has to plan 5 minutes longer on the 900km trip than the Long Range model, so the two are about the same there. The looks are kind of weird in that respect: the car doesn't look that great on the internet, but in real life it's a super nice car to look at, especially in the Performance variant!
Category | Points |
---|---|
Driving fun | 98/100 pts |
Service | 50/100 pts |
Range / Charging | 74/100 pts (10 pt bonus) |
Driving assistance | 90/100 pts |
Software | 90/100 pts |
Space | 100/100 pts |
Looks | 95/100 pts |
Total | 88/100 pts |
## Tesla Model 3 LR

My current vehicle, in its latest version, should not be missing from this comparison. Since I am concentrating in particular on driver assistance systems and software, the Model 3 comes out with almost the same rating as the Model Y - with deductions for space and looks (a matter of taste).

Unfortunately, the Model 3 LR also falls a little behind because it is significantly more expensive to lease than the comparable Model Y.

The charging time of only 44 minutes for a 900km trip is also interesting! That's a really good value - despite the significantly lower average charging power of 124 kW, the Model 3 LR is on a par with the Genesis GV60 Sport, which can offer a charging power of 190 kW but unfortunately has to stop to charge more often because of its shorter range!
Category | Points |
---|---|
Driving fun | 90/100 pts |
Service | 50/100 pts |
Range / Charging | 86/100 pts (incl. bonus) |
Driving assistance | 90/100 pts |
Software | 90/100 pts |
Space | 70/100 pts |
Looks | 70/100 pts |
Total | 79/100 pts |
## Tesla Model 3 SR

The comparison to the normal Model 3 (i.e. rear-wheel drive, no Long Range) may also be interesting here. With the same calculation basis, this one needs 61 minutes of charging on the 900km trip.
Category | Points |
---|---|
Driving fun | 75/100 pts |
Service | 50/100 pts |
Range / Charging | 57/100 pts (incl. bonus) |
Driving assistance | 90/100 pts |
Software | 90/100 pts |
Space | 70/100 pts |
Looks | 70/100 pts |
Total | 75/100 pts |
## Tesla Model 3 Performance

By the way, the Model 3 Performance only needs 49 minutes of charging for the trip - and according to the comparative evaluation scale it is clearly ahead.
Category | Points |
---|---|
Driving fun | 100/100 pts |
Service | 50/100 pts |
Range / Charging | 81/100 pts |
Driving assistance | 90/100 pts |
Software | 90/100 pts |
Space | 70/100 pts |
Looks | 75/100 pts |
Total | 82/100 pts |
## Audi Q4 e-tron

Unfortunately, I was not able to test drive the Audi, so these values come from online research and from friends and acquaintances I asked. I've driven other Audis, though, so I can make sense of what I found on the internet.

I find the strange "castration" of the Audi's charging power astonishing. If you take the small battery, you can only charge at a maximum of 100 kW. That sounds a bit as if this were one of the first electric cars ever. All the more astonishing because, with the big e-tron, they have shown that it can be done better.

The large battery charges at a maximum of 125 kW, and the top speed is 160 or 180 km/h. Funny concept, too. What is really strange, however, is that you only get the full power if you press a button beforehand, and then only for 10 seconds at most. Why?! I find that unnecessarily cumbersome.

Well, based on what I read, the Audi was eliminated from the start. It is in the Tesla league only in terms of price - approx. €69,000 in the Tesla-like configuration - and the leasing was one of the most expensive of those listed here.

On our 900km road trip, the Q4 e-tron also only lands in the middle of the field. At 47 minutes it's not the worst (that's the Mach-E), but it's not really up there either. The reason is the low charging power (103 kW on average from 10-80%) and the relatively poor real range of 385km.
I also don't find the car attractive, but that's my subjective opinion.
Category | Points |
---|---|
Driving fun | 60/100 pts |
Service | 80/100 pts |
Range / Charging | 51/100 pts |
Driving assistance | 80/100 pts |
Software | 70/100 pts |
Space | 80/100 pts |
Looks | 60/100 pts |
Total | 66/100 pts |
## Skoda ENYAQ

The ENYAQ is the most sensible of the vehicles discussed here. At least it looks like a sensible car. Unfortunately, there's no real fun to be had with the tame look. The driver assistance systems and the software have probably been taken over from the ID.3/ID.4 and are not the best in the ranking (albeit better than in the ID models). For the road trip we would have to add 67 minutes of pure charging time, which is really one of the worse values in this comparison.
Category | Points |
---|---|
Driving fun | 60/100 pts |
Service | 80/100 pts |
Range / Charging | 57/100 pts |
Driving assistance | 50/100 pts |
Software | 90/100 pts |
Space | 90/100 pts |
Looks | 50/100 pts |
Total | 66/100 pts |
## BMW iX3

I was really disappointed with the BMW. I thought that the people from Munich would manage to put out a great electric car, especially after their experience with the i3. Unfortunately this is not the case. In terms of design, I think they got a little carried away, too.

The iX3 is by far the most expensive to lease, the slowest to charge (80 minutes for the 900km road trip) and offers the least power. However, I assume that the software is typically BMW: good and easy to use.
Category | Points |
---|---|
Driving fun | 60/100 pts |
Service | 80/100 pts |
Range / Charging | 57/100 pts |
Driving assistance | 50/100 pts |
Software | 90/100 pts |
Space | 90/100 pts |
Looks | 50/100 pts |
Total | 58/100 pts |
## VW ID.4

Our industry leader, so hyped by the media, now also makes electric cars. Yes, unfortunately just "also". The software in particular is a real weak point in the ID models. It has been improved (fortunately there are OTA updates), but compared to Ford, for example, Volkswagen still has a lot of homework to do. There seem to be some hardware issues too: looking at it, I'd say the CPU/GPU is undersized, given how slowly the screen redraws at times. It seems they clearly saved money at the wrong end!

VW has virtually no vehicle in the same performance category as the Model 3, so the comparison is a little unfair. If I rate the driving pleasure relatively poorly here, that's because I could only configure the ID.4 with a maximum of 256 hp. So it's not really a competitor in that area either, because from 0-100 it's almost 4 seconds slower than the Ford Mustang (and the Model 3 does 0-100 in 3.5 seconds!).

Here, too, the navigation system ties you to Ionity, where you can probably charge more cheaply for the first year - but again, from the second year onwards it's 79 ct/kWh or €12 per month.

For our virtual road trip, a total of 72 minutes of charging needs to be planned, because of the charging power of just 100 kW.

It could also be better in terms of space. Neither the ID.3 nor the ID.4 offers a frunk worthy of the name. Also, both are based on VW's combustion engine platform, which wastes space. The trunk is comparable to that of the Ford Mach-E. This is far from "THE car".

Apart from all that, I find the car quite boring in appearance. However, I wanted to list it here for the sake of completeness.
Category | Points |
---|---|
Driving fun | 30/100 pts |
Service | 80/100 pts |
Range / Charging | 57/100 pts |
Driving assistance | 50/100 pts |
Software | 90/100 pts |
Space | 70/100 pts |
Looks | 80/100 pts |
Total | 67/100 pts |
## Conclusion

It will probably be a Tesla again. And yes, I can almost hear the shouts of "You fanboy, why didn't you check out car XYZ?". There are indeed many other electric cars out there, but I intentionally wanted to stay in the same or a similar price and performance segment - and there the air gets thin quickly.

And yes, Tesla's minimalist design isn't for everyone, I get it. But I'm fine with it. For those who can't have enough screens etc., I can warmly recommend the Ford Mustang.

But of course, these are all SUVs. If you don't want that, then maybe you should give preference to the Model 3.

However, I tried to rate the categories from my point of view, and I can truly say that I was open to all of these vehicles. My personal favorites are the Model Y and the Ford Mustang Mach-E - both super great electric cars, no question.
Here is an overview of the evaluation points:
Vehicle | Rating | Remark |
---|---|---|
Tesla Model Y Performance | 88% | amazingly cheap leasing |
Tesla Model Y LR | 86% | |
Genesis GV60 Sport | 83% | too expensive |
Tesla Model 3 Performance | 82% | became more expensive |
Genesis GV60 | 82% | |
Tesla Model 3 LR | 81% | |
Ford Mustang Mach-E | 80% | |
Hyundai Ioniq 5 | 80% | |
Volvo C40 | 78% | |
Tesla Model 3 SR (RWD) | 75% | |
Polestar 2 | 73% | |
VW ID.4 | 67% | out of competition, another performance class |
Audi Q4 e-tron | 66% | |
Skoda ENYAQ | 66% | |
BMW iX3 | 58% | far too expensive, charges slowly |
2022-05-07 - Tags: email security
originally posted on: https://boesebeck.name
Every once in a while I take a closer look at email clients to see if there is a good replacement for Apple's Mail.app. Spoiler alert: this search was not really successful, but at least for macOS I have a solution. On my mobile devices (iPhone / iPad) things are not so nice...

Email is the means of communication, not only for business, and having the right tool for the job is definitely a good idea. Apple's own email client is not really bad, but it could be better. There are some features a good email client needs to have: good support for encryption (GPG/PGP or S/MIME), Markdown support, powerful search and smart folders, rules and automated processing, delayed sending - and all of that without handing your credentials to a third party.

Some of those things are part of Apple's Mail app, but definitely not all of them. Other features you can get by using plugins or extensions, like GPG support. Some things cannot be added directly to the Mail app, but only to the system - and most of the time not in a really good way (like Markdown support).

There are a lot of email clients in the App Store, and I tried a couple of them. The result is sometimes really disappointing.

I took a closer look at email clients on macOS and iOS and compared them with the features listed above. I did not look at email clients that do not support standard IMAP, because my primary accounts are IMAP accounts. So this list is definitely not complete, and it reflects my personal experience and my opinion!
Since this is such an important topic for me, let's quickly make a small digression about how email works and what email security is all about.
The emails we use today look very colorful and styled, but they are based on protocols that date back to the last millennium. Encryption was not thought of at the time and, of course, was not implemented. Strictly speaking, there are three protocols:

- Simple Mail Transfer Protocol (SMTP) is a plaintext protocol used to send emails. The security of the protocol has been somewhat improved by bolting on SSL, which is then referred to as SMTP/S.
- Post Office Protocol 3 (POP3) is also a plaintext protocol, used to read emails from the server. It is still in use, even if it's quite limited in functionality. This, too, is secured with SSL, so at least the communication to the client is encrypted.
- Internet Message Access Protocol (IMAP) is a more "modern" alternative to POP3 and offers many more functions. With IMAP, the connection to the server stays open and the server can, for example, "let you know" when an email arrives (push). There is an SSL-encrypted variant (IMAP/S) here as well.

What's the problem then? In principle, communication with the client is encrypted when SSL is used. This is a good idea per se, and nobody should retrieve emails without encryption.
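As a side note: with Python's standard library you can see this transport-encryption part in action - an IMAP/S connection is TLS-encrypted on the wire, even though the mails on the server stay plain text. Host and credentials below are placeholders.

```python
import imaplib

# IMAP/S: the connection itself is encrypted (placeholder host/credentials).
with imaplib.IMAP4_SSL("imap.example.com") as imap:
    imap.login("me@example.com", "secret")
    imap.select("INBOX", readonly=True)
    status, data = imap.search(None, "UNSEEN")  # server-side search
    print(status, data[0].split())              # ids of unread messages
```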
But the big problem lies on the server side: all emails are usually stored in plain text, as normal text files. That's not tragic per se, but it means I have to trust the server admin 100%!

And not only them - the admins of both the sending server and the receiving server, because the email normally arrives on both in PLAIN TEXT.

And even that's not all, because sometimes emails are also sent via relays; then I have to trust the admins of those relays as well.

And I don't just have to trust the admins, but also their ability to secure the servers. Because this is exactly what hackers are looking for: easy access to a lot of personal information - and you can get a lot out of a person's emails...

Couldn't you store the emails encrypted on the server side? Sure, but then the server would have to be able to decrypt the emails again and pass them on to the user in plain text (if you ever want to read them). This means the server process must be able to decrypt these emails - and that in turn makes it insecure, because any hacker would then probably have access to this key as well. So the emails are simply stored as-is.

Since the protocol for sending emails is plain text and they are also stored in plain text, we have a problem here. But given email's popularity, it was not possible to simply roll out a completely new, secure communication method. Hence the idea came about to solve this using encryption mechanisms.
Of course, I can simply encrypt the emails and send them to the recipient. But how does the recipient get the key needed to read the mail? Send it by email too? Probably not... so what then?

Luckily there is something called "public key" encryption: it creates two keys, a private key that hopefully nobody else will ever see, and a public key that you can send anywhere.
If you encrypt a message with the public key, only the owner of the private key can decrypt the message. This protects it from third parties, even if admins or hackers have access to the emails.
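The principle is easy to demonstrate. The following sketch uses plain RSA via the Python cryptography package - this is not PGP itself, just the underlying public-key idea: anyone may encrypt with the public key, but only the holder of the private key can decrypt.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The recipient generates a key pair; the public half can be shared freely.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: encrypt with the recipient's public key.
ciphertext = public_key.encrypt(b"my secret mail body", oaep)

# Recipient: only the private key can decrypt.
print(private_key.decrypt(ciphertext, oaep))  # b'my secret mail body'
```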
In the 1990s this was published as a system called PGP, or "Pretty Good Privacy". Server infrastructures were created where public keys could be stored. You could then search for the recipient's email address and use the matching public key to communicate securely with that one recipient.

As I said, this was created in the 90s, but it was super cumbersome to use, and PGP was not a real success. With the open source movement, however, there was soon an "open" implementation to replace PGP: GnuPG, or GPG for short.

In the end, there is not much difference between PGP and GPG, and GPG was just as cumbersome to use - which didn't lead to a "real" breakthrough for GPG either.
There were alternatives that tried to work around this cumbersomeness - managing the keys, finding the right key when sending, decrypting, and keeping everything secure. These approaches tried to shift the problem to a server, so that the user no longer notices anything about the encryption; the communication between the servers was addressed, so to speak. But that wasn't crowned with success either, because it can only work if all my recipients also use this technology.

Currently there are somewhat simpler approaches where, in the end, you no longer use SMTP directly: the client talks to a server, my outgoing email is handed over there, and the server takes care of the encryption - and of decryption when retrieving.

OK so far, but again there's the "problem" that the emails arrive at the server unencrypted at some point - no end-to-end encryption. You can tackle this by dropping SMTP almost completely and establishing your own email system. But that's a chicken-and-egg problem: it works when there are a lot of people using it, but people only come and use it when there are already a lot of people there... So you somehow need a bridge between the new, encrypted email traffic and the old, existing one. Even that wouldn't be the problem, but now you have another data protection issue: for this to work, the bridge server has to have access to my emails. This usually means that I have to store my email account credentials somewhere on this server. And now we have the problem again: there is a server that accesses my accounts... No go!

OK, a willing troll will tell me: but the providers encrypt and decrypt the emails on the end device and therefore only store them in encrypted form. That's right, at least in most cases - but doesn't that make these servers particularly interesting for hackers? There is a server whose operating company advertises that the emails are only stored in encrypted form, so explosive information should be found there... a honeypot for hackers.

Actually, it would all be so easy if the clients simply supported GPG/PGP. These standards have been around for ages; sending works fine over standard SMTP, and receiving via IMAP or POP3 is also no problem. Unfortunately, it is very rare that the implementation is really useful.
What does a good and useful GPG implementation need? Good key management, seamless encryption, decryption and signing while reading and writing mails - and all of that without getting in the user's way.

And which mail client combines all those features into one neat piece of software? None! Fortunately, on macOS there is the GPG Suite, which I warmly recommend to everyone. That gives you good key management in an external tool.

Email clients that support PGP or GPG encryption are very rare. The few that exist often implement their own key management, which almost always entails problems (it's no different on iOS). Only MailMate supports the GPG Suite.
Email encryption should become much more standard. But unfortunately the email clients that support it are really rare.
One reason why GPG and PGP haven't really caught on is that you can no longer simply search your emails. IMAP servers usually offer a server-side search, i.e. the server searches the emails for specific content. If the content is encrypted, however, this is not really possible. This can be worked around using a local index (i.e. the most important information about the encrypted mail is saved locally after decryption). But that is only of limited help and also reduces the security of the data (again, something is lying around somewhere in plain text).
However, this only refers to the content of the email - all headers and fields that an email carries (the best known probably being SUBJECT, TO and FROM) remain unencrypted! So the notorious metadata is still in plain text and can be read "just like that" on the servers.
## Signing

Signing should also be mentioned for the sake of completeness. It's about recognizing whether an email has been altered in any way. One could imagine someone taking the email, changing the content and simply forwarding it to the recipient. To prevent this, emails can be cryptographically signed.

The process is actually quite simple: the sender creates a checksum of the email and encrypts it with his private key. Anyone who has access to the associated public key can recover the checksum and check whether it is correct - and the email therefore unchanged.

Most of the time, signing and encryption are used together - better safe than sorry 😉
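Signing is the same key pair used the other way around. Again a sketch with the Python cryptography package, showing the principle rather than the actual PGP message format: sign with the private key, verify with the public key - any change to the content makes the verification fail.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

message = b"the mail body as sent"
signature = private_key.sign(message, pss, hashes.SHA256())  # sender signs

try:
    # Recipient verifies the signature against the (here: tampered) content.
    public_key.verify(signature, b"the mail body as received",
                      pss, hashes.SHA256())
    print("mail is unchanged")
except InvalidSignature:
    print("mail was altered in transit!")
```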
## S/MIME

S/MIME was another attempt to implement a standard for email encryption. This standard is supported by more email clients than PGP/GPG, but it has its drawbacks: you need an X.509 certificate issued by a certificate authority, which usually costs money and has to be renewed regularly.
## Apple Mail.app (macOS)

Actually this mail app is not that bad: it has most of the features listed above, and usually you can do everything necessary with it. There are some minor things missing, though. Markdown support works only via the Services menu, and only if you have `multimarkdown` installed. You could set a key combination for that, but it is not really convenient: first type your Markdown text, then select everything, press your key combo, wait a couple of seconds, and then you see the formatted text. If you made a mistake, you need to undo it, edit your text, select everything again, and so on...
The GPG support is good: install the GPG Suite and you're good to go! No hassle there 😉

Privacy is no problem here either. Apple is not getting your credentials or anything else. And even if they got some, data is not Apple's core business (compared to other companies). Apple keeps stressing how much they honor the privacy of their users and how much they protect their data - which makes everybody take a closer look at what they actually do with data and what might violate their own statements. So I feel quite OK with that.

Mail.app's support for rules and automated processing of mails is OK, though it could be better. Unfortunately there are no gimmicks like "know your From address based on history" or the like. There were some plugins (feingeist.io) that did something along those lines, but they changed their business model to a subscription-based product called MailButler. It is worth a look, though.
Here is a list of useful tools around the standard Mail.app:

- GPG Suite for encryption (some clients like MailMate depend on that being installed)
- Markdown support via a service; you need `multimarkdown` installed for this to work, using brew: `brew install multimarkdown`
- feingeist.io also had a tool called SendLater, a simple little plugin that only added delayed sending to OSX Mail. Unfortunately they decided to integrate it into MailButler - so you cannot buy SendLater anymore. Really a pity...
## MailMate

MailMate has been my favorite email client in recent years. It is still actively developed and has a great community. The client is very versatile and can be customized in hundreds of ways. It has one of the best GPG integrations of any email client, and it only works with IMAP accounts. Granted, it doesn't exactly look "sexy", but it's very useful 😉

The IMAP-only support is a minor issue, as it also means you cannot access an Exchange account directly; you have to use a tool like DavMail.
MailMate offers different views; a special feature is the view called `ThreadArcs`:
You can see the progress within a discussion thread and select individual emails directly. In the screenshot you can also see the seamless integration of GPG.
Another special feature is the display of statistics. Virtually any header value can be used as the basis for a statistic; a statistic on the email clients used is certainly interesting.

And, as you can see there, the dark mode of macOS is of course supported - it seems nothing works without it anymore 😉
MailMate is not very convenient in some cases, especially when defining your own keyboard shortcuts. To do this, a `.plist` file needs to be created and specified in the settings; in this file you then have all the options to choose from (a few settings are made via `defaults write`, not via the plist file).

One of the best and greatest features of MailMate is the search functionality: you can specify searches for more or less any field and also define complex searches.
This is really a special feature and no other e-mail client I know of offers such search options.
If that is too complicated for you, you can simply use the general search field at the top of the window. With the syntax "NAME: value" you can search specific fields there, and there are abbreviations for some fields: `s test` searches for emails whose `Subject` contains the text `test`, and `t me@home.de` shows all emails that were sent to the given address.
And best of all: you can save these searches as a "SmartFolder", which then appears in the navigation tree on the left side of the window.

For example, I set up a SmartFolder that shows me all emails of a conversation, including the ones I sent. That way the complete correspondence is always displayed.
## PostBox

PostBox is based on Thunderbird (whether this is an advantage depends on the user). Normally, extensions for Thunderbird also work in PostBox.

The PostBox team has also built its own extensions and added a few new functionalities.

Unfortunately there is no support for GPG, no Markdown editor (you can retrofit one via an extension, but in my tests that was only partially successful) and no support for Exchange (unless you use [DavMail](http://davmail.sourceforge.net) or the Exchange server is also configured for IMAP).
## Altamail (macOS)

Altamail is a very capable email client with great search functionality and a ton of options. It is the macOS counterpart to the iOS version discussed further down.

One very good feature of Altamail is the special support for work-life-balance settings: you can choose to be notified about emails only depending on time and/or place. Cool thing.

Other than that, I had my issues with it.
## Airmail (macOS)

Airmail looks the best of all the clients tested.

It has fairly good Markdown support, but no delayed sending. Still, it is currently one of the best mail clients in terms of features: the functionality is really good, and the integration with the Reminders app and the calendar is really good too. There are also features like "pausing" individual emails so that they reappear as new in the inbox after 5 minutes. Nice feature.
What bothered me most about Airmail was the very poor search function! You cannot restrict the search to specific fields. Also, there is no way to create something like a SmartFolder.
Furthermore, there is no real GPG support in Airmail. There is a plugin that is supposed to retrofit this function, but it is apparently no longer actively developed - I couldn't get it to work.

All this, plus the fact that in my test it was never able to display the right number of unread mails, made me decide against Airmail.
Another plus is that Airmail has an equally nice client for iOS, so you have all the features on all devices. However, there is a catch with the iOS version: you have to use Airmail's push service, which must at least raise concerns, because I have to hand over my login data to Airmail. That's a no-go.
You can certainly argue about the interface. It looks very nice, but I personally find it sometimes really confusing. The look is definitely a plus, and I've been trying to really use Airmail for several years now. Unfortunately, it doesn't work out, because just looking pretty doesn't help much at work.

The compose function is great and you can do a lot with the emails, but the functions offered should then also work. Unfortunately, they don't.

The IMAP support leaves a lot to be desired: random directories are created on the IMAP server (to express certain "flags" of an email, like ToDo - why not use the functions integrated in IMAP, like tags?). This means that emails are simply "gone" in other clients and have to be searched for (e.g. in Mail on the iPhone).

And if, for example, there are serious errors when accessing an account (wrong password, network error), the icon of the account simply gets a red border... nothing more. I didn't notice for hours that I didn't have access to one of the mail servers. Not even an error message or any information is shown as to why the access does not work - you are left guessing...

Then, I never managed to see the same number of emails (via IMAP) in Airmail as in OSX Mail, PostBox, Thunderbird or MailMate. Airmail just doesn't load some emails... and it crashed several times while loading.
What I have already noted in some reviews has still not been corrected: the sometimes hair-raising and often misleading translations of the interface. Airmail is best used in English - then it's about right.

The most serious thing, though, is that the features are not up to par. An "Intelligent Folder" can be saved from a search, but it can't really do much: AND/OR combinations didn't work for me in any way.

The same applies to the rules. The criteria that can be matched are OK, but there should be a lot more. Also, you can only apply rules to all accounts or to just one, which doesn't make sense either (you have to duplicate the rules if you have multiple company email addresses, for example).

I also find the lack of support for current encryption technologies (S/MIME, GPG, PGP) shameful - that's simply a must these days. And the available plugin, which is supposed to retrofit it, doesn't work at all.

So I can't really recommend Airmail for professional use. A pity.
There is one star for the good email editor, the prettier and more modern display, and the support for themes.

Now, for this superficially pretty but technically rather questionable thing, a monthly fee is demanded... Honestly, I understand that software developers like to sell subscriptions - but then I also expect something from the software. This "mailer" isn't worth €3.49 per month or €10 a year to me.

Summary:

Positive: the looks, the modern display and themes, the good email editor.

Negative: the weak search, the unreliable IMAP handling, the missing encryption support, and the subscription pricing.
## Canary Mail

Canary Mail is definitely one of the better email clients. Especially the really easy-to-use support for encryption and the fact that you can write emails in Markdown are advantages.

The interface is nice, but it quickly becomes a bit confusing if you have multiple accounts.
It's also great that the settings can be synchronized via iCloud (and not some ominous third-party provider). And if even that is too difficult for you, you can use a QR code.
The dark mode works quite well, but sometimes it looks a bit off when composing emails, especially when emojis are inserted.

There are some really cool features, like configurable swipe gestures (works great with the trackpad on macOS!), a snooze function (emails come back to the inbox later), pinning messages and starring messages.
One of the most important features is the support of PGP encryption directly on the device without having to rely on a server.
There is also the option of sending encrypted emails via the operator's own server. A slightly different approach is chosen here: your email is not sent directly; instead, only a link to a page where the recipient can read the email is sent. That's really only partially helpful - a nice gimmick, but not recommended from a security point of view.
All in all really solid, but there are a few things that could - or rather should - be improved. Especially the lack of rules and smart folders or storable searches is a problem - those are a must for people with a lot of emails. The missing support for code blocks in Markdown adds to that list, unfortunately.

Unfortunately, I can't really use the app like this. A pity...
## Email clients on iOS

With iOS you have to make a few compromises, for better or worse, that have technical reasons.

Polling somehow only works with Apple's own app; all others depend on push notifications. But that means you will only be informed about new emails if you activate this push service and thereby give it access to your own account. This is a no-go in my opinion, because the emails have to go through the provider's server so that a preview can be shown in the push message. I really don't think it's necessary for Apple to lock this down so much. With all other mail clients, polling works in such a way that iOS tries to "guess" (aided by AI) when the polling should take place. That may well make sense at times (at night, when you're sleeping anyway), but it's quite limited - you get notified about emails at odd times.

So on iOS you either have to say goodbye to data protection, or to the idea of being informed immediately about incoming emails. Basically, I don't think that's a problem anyway, since I don't need notifications for emails. If someone urgently wants something from me, they should call or send a chat message!

It's a real shame, though, that I haven't found a mailer that supports Markdown on the iPhone. Admittedly, that doesn't make much sense with the small keyboard, but it does on the iPad with a "real" keyboard connected.
## Apple Mail (iOS)

Honestly - for 99% of users, Apple's own email solution on iOS is quite sufficient. It is well integrated into the system, and retrieving emails works reliably here too, without credentials ending up on Apple's servers.

The search is relatively limited: you can actually only enter a text into a search field, which is then matched more or less everywhere. Where appropriate, suggestions are made that may narrow the search to certain fields (sender address, recipient, subject). But that's not particularly "powerful", of course.

Apple Mail on iOS is a solid email client that really just "works". It doesn't cause any problems, but it also doesn't offer a great many functions. If you are an email heavy user, this may not be enough and you may want to expand the feature set.
## Boxer - Workspace ONE

Boxer - Workspace ONE comes from VMware and is a really good and nice-looking email client. Basically, it differs only slightly from the standard Mail app, but it offers some functions to connect emails with the calendar. This is solved quite nicely; you have direct access to it from the UI.

Another useful feature is the configurability of the swipe actions, i.e. you can specify which action takes place on a short or long swipe on an email.

For the swipe actions you can choose between the "usual suspects" (read/unread, delete, archive, move, spam, etc.). However, there are two special features: you can answer an email with the "Quick" action, choosing from ready-made short reply texts, and you can "like" an email, in which case your "agreement" is sent as a reply. How much sense that makes remains to be seen.

Unfortunately, an option called "Notebook" doesn't work unless you have installed the corresponding app.
Another advantage is the direct support for Microsoft Exchange.
Overall a good email client, offers a lot of features and is a good alternative to Apple's own Mail.app. The app is free and really worth a look!
But: there are still too many bugs. For example, Exchange mails are not marked as read, or the read flag comes back?! Similar errors also occur with normal IMAP accounts, though more rarely. Even worse: it changed random emails back to unread! This is really not good.
## Altamail (iOS)

Altamail is quite respectable, but the sheer amount of options quickly overwhelms you - similar to the macOS version discussed above.

In general, Altamail offers so many settings and options that listing them would go beyond the scope here. You can adapt the appearance to your wishes, configure swipe gestures similar to other mailers, there is an integrated calendar connection, and much more.

Altamail gives you the option to use the Altamail push service - or not. I find it very pleasant that you are not forced to disclose your email accounts in any way. There are certainly people for whom this is not so important and who would rather enjoy the convenience, but as a user I at least have a choice here. I like it!
Unfortunately, the app is quite expensive: €0.99 per month, one year of use costs €9.99, and a lifetime license is €49.99.
Altamail is the only email client for iOS that also supports more complex searches and even has its own spam filter. It is also possible to store your searches as a smart folder. That's worth some pluses!
Unfortunately, Altamail does not provide any encryption or GPG support.
Airmail is the iOS equivalent of the MacOS version described above and is one of the prettiest email clients from a purely visual point of view - though of course that is also a question of taste.
However, the iOS version still has some disadvantages compared to the MacOS version. A major shortcoming, and unfortunately an absolute no-go: for push notifications, the IMAP credentials (username and password) are stored on Airmail's servers in the USA. In addition, the mails must (of course) also be read there in order to build a push notification with content. The advantage is that the mail (or the notification about it) arrives on the iPhone the second it arrives on the mail server. That's fine, but really not necessary for emails at all!

If you turn off push notifications, the credentials are not sent to the Airmail servers - but then no emails are retrieved either, only when you open the app. Polling the emails at certain time intervals (as Apple Mail does) is not possible. Of course, this is partly due to the restrictions Apple imposes on app developers, but I would have liked to see the polling that other apps offer here.

None of this is really contemporary anymore. Maybe you want to give your passwords to other companies; that's ok. But I would like to have a choice - it's enough for me if I only get my emails every 15 minutes. That's really a shame, because I actually find the features of Airmail very appealing. But because of the problems mentioned above I'm not allowed to use my company address in Airmail - and that makes it unusable for me. Many others will have the same problem with business email addresses. Too immature for professional use and, "thanks" to the GDPR, not recommended.
And currently you have to take out a subscription to be able to use all the functions.
I was hoping that with Airmail I had found a good mailer that would also serve reasonably well on the desktop. Getting both from one "source" is really nice. But unfortunately, similar to the desktop version, the joy is marred by a few annoying bugs and shortcomings.
And the positives:
- the optics are great
- the mail editor is easy to use and offers great formatting functions
- if you use Airmail on several devices, you can have the account settings synchronized via iCloud
What is really annoying is the basic functionality. I sometimes no longer see emails at all if they have already been read on another device - only when I manually press the sync button. According to the settings, this sync should happen automatically at least once an hour - but it doesn't!
Then I keep having the problem that the filter functions in the list view do not work properly. If you switch to "Threads", all emails that belong together are grouped, but they are not sorted correctly by time, so threads with unread emails end up "somewhere".
What exactly this "smart" sorting is supposed to do has not revealed itself to me. Airmail is really full of nice ideas which unfortunately (once again) were implemented more or less half-heartedly.
Too bad, because Airmail really has the potential to become something really good. Not a bargain, but ok. If you don't work with multiple devices on your email accounts, the problems mentioned above shouldn't occur. What doesn't work at all these days: encrypted emails that you might want to decrypt with another app. Airmail is unfortunately not able to recognize the "encrypted.asc" attachment that is created when encrypting with GnuPG, for example - so you can't send it to another app to decrypt it.
Canary Mail is not just the namesake of the MacOS version above: it is one of the few iOS email clients that supports on-device PGP/GPG encryption. That works relatively well.
The encryption and security features are great, no question. But unfortunately the app feels "unfinished"; there are constantly annoying bugs. For example, today no emails were marked as read. Or when you want to answer an email, the editor stays black and you can't see what you're typing. (I have tried to use this app for about a year now; most of the bugs are still there.)
The interaction with the Mac app is great, I really appreciate that.
By default, both apps add a read receipt that reports back an "email was read" notice - via the manufacturer's servers. I would have liked to see that noted a little more clearly somewhere, and not behind several clicks in the detailed terms and conditions.
All in all, I can't recommend the client on either iOS or Mac - especially not at this price! You're constantly being interrupted in your work because something doesn't work or Canary just can't do it (such as more complex searches). Unfortunately.
I've tried using Canary several times now, at intervals of several months. But again and again bugs are annoying and some things just don't work. Still not.
There are some apps trying to bring GPG or PGP to iOS. One of them is PGPro; it is an open-source implementation and can be obtained free of charge from the App Store. It lets you create and decrypt encrypted content relatively easily.
Unfortunately, the presentation is very technical. The content of a decrypted email looks something like this:
That's just hard to read. And it gets even worse when the message is also signed. Then the cryptographic signature "hangs" on the text.
It's a crutch, but unfortunately probably the only way to use GPG on iOS at the moment.
iPGMail is a similar tool to PGPro, but it has some more features and can send encrypted emails directly.
Encrypted attachments can be sent directly to iPGMail (similar to PGPro).
iPGMail does have a basic keyserver search implemented.
Tessercube is very similar to PGPro and iPGMail. It is used in conjunction with some other email client to decrypt incoming mails or create an encrypted version of an outgoing mail.
This is as cumbersome as with the other two candidates, but works as expected and has a nicer look to it. The workflow is the same for all the GPG tools above: you select the encrypted file you got via email (usually called encrypted.asc) and try to open it. Then you can choose from the available actions and apps - choose Tessercube and it opens there.
When adding a key or keypair, Tessercube only takes keys from the clipboard. That is a bummer - there is no keyserver integration whatsoever.
It is really hard to find an email client that ticks all the boxes; you will probably have to make at least some compromises. On the Mac I found one that ticks all my boxes: MailMate works great, has good Markdown support, exceptionally good search features and does encryption like a charm. I have used it for quite some time now, and it has never let me down. It has one drawback: it does not support Exchange accounts - you need to install DavMail to access them. But then it works fine. For me, there is no other choice of mail client on OSX. On iOS, however, most clients do not really work great, and most do not support GPG.
I think Boxer is a really good alternative on iOS, but I did not get it to pull any mail automatically. And it marks some emails as "unread", randomly - not only locally, but also on the server: the mails look unread in all mail clients.
Altamail is also a great email client for iOS; the extended searches and the extreme number of features are just mind-blowing. The same goes for the MacOS version; they work nicely in conjunction. But in some places the settings are just way too overloaded. And it is an expensive mail tool.
So, what do we learn now? It took me some weeks to actually test all those clients, and it was fun... sometimes frustrating. So for me it will be like this:
I do this evaluation of mail clients every once in a while, and every time I finish disappointed - but it seems to get better. At least there is an OSX version that supports all features (MailMate) and works stably. So there is hope that at some point there will be a good implementation for iOS as well. For now, it is still missing ☹️
Having the plain text on the server side helps with fighting spam: the server can examine the content of an email and decide on that basis whether it is spam or not. When mails are encrypted, things are more complex, and spam filtering can only take place on the client side. BUT: having to encrypt every spam mail individually for each recipient would make sending spam far less attractive in the first place.
This explanation is somewhat simplified, but the facts remain: emails can be read without any problems by all admins of the servers on the path that an email takes!
To my knowledge.
category: global --> E-Mobility --> Tesla
2019-07-04 - Tags: Tesla E-Mobilität Model 3
originally posted on: https://boesebeck.name
I was self-employed for a long time and drove a lot of good and new cars - all leased, all new. I will write a blog post with a list at some point; that was also an exciting journey.
Since I am no longer working as a freelancer and therefore no longer have a business, I have since driven used cars, mostly petrol cars with large engines (because of the assumed better durability and the fun that brings for a petrol head).
But my current company (https://www.genios.de) has a company car scheme which I can use. Since my 5 Series BMW is slowly falling apart and some TÜV-relevant repairs were coming up (certainly in the range of around €5,000), it was time to at least think about a new car or even a company car.
So I started to look more closely at the topic. As is customary in Germany, the state has its hand in everything and has rules for everything to be observed. And it's not a good idea to approach this naively. You try to find the best and cheapest way.
For us it was important to have a vehicle you can use for excursions and vacations. And for that we need space - dog and child also want to go on vacation :smirk: That means a microcar is out of the question.
The costs we currently have for the used car should not be exceeded if possible (and those were quite high, with all the repairs, fuel, oil changes, etc.).
And so I started the search. What do you do ...?
Sure, such a 5 Series BMW in "new" would have been chic. But unfortunately also very expensive. Which is why you have to look at the costs more closely...
Or not. For one, in my case it's not really that important for the company that I'm mobile. I am not in sales and have no customer appointments. So a car would be "only" an incentive. That means you don't get a company car for "free" in this case; you have to cover most of the costs yourself.
As with almost all companies in such a case, you can have the car leased by the company, but the costs are deducted from your gross salary. The company calculates what the car costs per month all in all, and this is deducted from the gross salary - depending on your negotiating skills, maybe a bit less.
The salary conversion sounds ok at first, but becomes more interesting the more taxes you pay. Example:
Of course, these examples are not exact; the solidarity surcharge etc. is neglected here. They are just meant to clarify the background.
On top of the costs mentioned above comes the so-called payment-in-kind. This means that you have to pay taxes on the private use of the vehicle. The amount of the payment-in-kind is added to the gross salary and then taxed. (For the sake of completeness it should be mentioned that you can also calculate the payment-in-kind with a logbook, but that only makes sense if you have business trips!)
The monetary value is calculated as follows (as of 2019):
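As a rough illustration only (a sketch of the 1% rule as I understand the 2019 rules; the numbers are example values, not tax advice):

```java
// gross list price of the car in EUR (example value)
double listPrice = 55_000;
double commuteKm = 10; // one-way distance home <-> work

// combustion engine: 1% of the gross list price per month,
// plus 0.03% of the list price per km of commute
double combustion = listPrice * 0.01 + listPrice * 0.0003 * commuteKm;

// electric car (2019 rule): the gross list price is halved first
double halved = listPrice * 0.5;
double electric = halved * 0.01 + halved * 0.0003 * commuteKm;

System.out.printf("taxable benefit: combustion %.0f EUR/month, electric %.0f EUR/month%n",
        combustion, electric);
```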
For electric vehicles the whole thing changes in the future: from 2020 the gross list price is no longer halved; instead, depending on the size of the battery, the gross list price used for the calculation is reduced - €500 per kWh (I think), but at most €10,000. So if you go electric, do it soon!
So, since we also have some demands of such a vehicle, the "cheapest" ones were left out. A trip to South Tyrol with a Dacia Duster would certainly be possible, but certainly not much fun :smirk:
Just because of the tax incentives an electric car is a very interesting alternative at the moment. That alone is of course no reason, there are other substantial reasons:
Since an electric car also means lower costs for the company, my gross salary conversion is smaller, too. So you save a little bit every month.
Well, to put it in a nutshell: the German carmakers have unfortunately totally failed to bring usable electric cars to the market. With the existing vehicles you can cruise wonderfully within the city, but traveling is really difficult to nearly impossible. Just take a look at how the E-Cannonball went in 2018, and when the individual vehicles arrived in Munich after the trip from Hamburg. In addition, they demand maintenance at fixed intervals. That should actually be unnecessary.
And although Tesla does not require mandatory maintenance, you get a full 4-year warranty on the vehicle with each Tesla. And 8 years on drive and battery! Who else offers such a thing?
Tesla has not only built a fancy car (or several), but above all they understood that you need a simple, unified, large-scale and fast charging infrastructure. In Germany, the SuperChargers (i.e. Tesla's rapid charging stations) are rarely more than 100 km apart, so there is always a charging station in reach. Of course, the range of > 400 km of the Teslas helps as well.
And thanks to the high efficiency of the Teslas and the fast charging power of the stations, you can recharge enough within a reasonable time to drive on. This makes a relaxed journey within Europe very possible.
With the other manufacturers there is no such infrastructure; there are many providers for charging. From Ionity to Telekom, everyone has a different approach. This is hard to see through: the costs differ everywhere, as do the billing methods and the charging speeds. This makes planning ahead difficult to impossible.
In addition, you may need several charge cards from the respective providers, or you use a charging station of a provider that offers roaming for your cards (which, of course, means additional costs). This is not only inscrutable, it is actually a knock-out criterion if you want to travel further away with your e-car - in my opinion!
There is gossip on the internet about someone who bought an Audi e-tron: whenever he wants to go on vacation, the car dealer provides him with a diesel for free... because of the charging issues!
Well, the charging infrastructure is one of the main reasons why I use a Tesla. Although we have to admit that things are getting better for the others, too.
I started researching in March 2019 to find out what is the best way for us. Even finance a car, again a used car, a company car etc.
Then you come to a first conclusion: the electric company car actually fits best. However, I am the first person in the company who wanted an e-car, i.e. there is no process for it yet. What would the costs for the company be, what would I have to pay...
And one of the most important questions: how to charge it! In garages, it's not that easy to set up a charging point; all owners have to agree unanimously. That almost never works. At my home, for example: no chance. They even refuse if the garage should be swept...
Charging at work would be great. The SWM (Munich's public utilities) have a funding program for the charging of electric vehicles and the development of charging infrastructure. The package they put together is really interesting. Actually, there is no reason even for the owners of garages to reject it: the owner basically has nothing to do. The SWM take care of connection, maintenance, setup etc. and guarantee that it has no influence on the other power connections.
Nevertheless, you have to charge at home as well - there are holidays etc... For me that is not sooo easy. As I said, getting a power outlet in the garage is virtually impossible. So the Tesla is not charged in the garage, but has to be charged in front of the house (public parking, not my own parking space). There I now have to install a CEE16 outlet, to which I can then connect a wallbox.
A "charger" is with the Tesla yes, but this charger can only max 1 phase charge (the Model 3 that is, model s used to have a 3 phase charger). That you get max 3.7kW with it, if you can plug it into a 16A outlet.
That 's not all that much, so to charge a Tesla Model 3 with a battery capacity of about 75kWh from 10% to 80% (you should not charge every day to 100%, and 80% in my case is about 400km) about 15h. If you could do it now with a 3-phase charger, you would only get to about 5h ...
And for the sake of completeness: you can also load the Tesla on a normal power outlet. But you should limit the current to 10A max. Then you'll get 2.4kW charging power (in the most favorable case.) For me, the voltage dropped to just under 200V in this case). And so the Tesla needs from 10% to 80% in about 22h. Realistically more like 30h.
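The arithmetic behind those numbers, as a small sketch (charging losses are ignored here, so real times come out somewhat longer):

```java
// charging a 75 kWh battery from 10% to 80% means refilling 52.5 kWh
double energyNeeded = 75.0 * (0.80 - 0.10);

// 1-phase 16 A, 3-phase 16 A, household outlet limited to 10 A
double[] powersKw = {3.7, 11.0, 2.4};
for (double p : powersKw) {
    System.out.printf("%.1f kW -> about %.0f h%n", p, energyNeeded / p);
}
```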
At the moment I'm thinking about getting a wallbox from stark-in-strom.de; they are quite cheap to buy and include everything you need (including the required RCCB and fuses). And if you ask them, they will fit the length of charging cable you want. So I now have a 12 m charging cable, although the website listed 5 m as the maximum. But that is not the best solution: the parking space is often occupied, so I cannot charge the car. And in winter, when there is snow on the streets, the snow-plowing service dumps the snow - exactly: on that specific parking spot. So for winter I need another solution. Until then I am fine.
These costs should then be clarified with the employer again; especially mobile solutions are certainly something that stays in the car and actually belongs to the car... :smirk:
After a lot of back and forth together with the managing director of [GBI Genios](https://www.genios.de), the decision was made to bet on the Tesla and try it out. I was, so to speak, the test balloon in this case...
So first I contacted some leasing companies. Most did not have the Tesla Model 3 on offer, and if they did, then at astronomical prices (leasing rates of > €1,200 were not uncommon).
At some point I came across [Kazenmaier](https://www.kazenmaier.de), who had a really interesting km-lease offer for the Model 3.
So, with the offer, the discussions continued and then ... Tesla changed the prices. Starting over...
The second offer came in, also ok - and Tesla changed the prices again. In the period from April to the end of May, Tesla changed prices about four times.
During this time I also contacted Tesla and asked when such a vehicle would be delivered. "It takes about 3-6 months," they said... When we did the test drive, it was "certainly 3 months"... well...
Kazenmaier offers the lease after deduction of all subsidies, i.e. the leasing rate is cheaper, which is definitely interesting for a company car. However, the subsidies depend on the gross list price and had to be adjusted every time. So I had to wait from the beginning of April until pretty much the end of May before we could finally place the order.
It was somehow "different" than usual. We sent the documents to the lessor, everything was OK, they ordered the vehicle. Then Tesla called me and said "so, we can now perform the order together"... Huh??!?
On the phone we configured and ordered the vehicle yet again. It turned out that the prices had changed again. Great. We ordered the vehicle nevertheless. The leasing company promptly sent a new contract, but the leasing rate stayed the same... back and forth...
Then I was also told that Tesla "is not able to register the vehicle due to the high order volume" - you have to do it yourself. Awesome... NOT!
When ordering, I was told the vehicle would be here in "probably 6 weeks, but personally I think it comes in 3".
Well, that's cool, delivery time shortened to a sixth.
6 days later I got a call from Tesla: I could pick up my vehicle... but it is in Nuremberg. Tesla pays taxi and train tickets, though...
Oh great... then off to Nuremberg. The conversation was on Wednesday, Thursday was Corpus Christi. The lady on the phone thought I could pick up the car on Monday.
Of course that was not possible; I had no papers. And with that, I found myself in the Tesla universe, where things work quite differently than in the rest of the world...
The documents for the registration were not sent on Friday... On Monday I tried desperately to reach someone to still make it work. After some attempts I reached someone in the evening, and they said: "oops, the papers went to Karlsruhe" - to the leasing company. We had gone over it ten times that the stuff had to come to me so that the registration could work...
The leasing company received the documents on Monday, sent them back to me at the expense of Tesla via Overnight Express, and on Wednesday morning I got the papers. The appointment for the registration was the same day ...
That finally worked. E-plate and fine dust sticker obtained, car registered.
On Friday we took the train to Nuremberg, and from there a taxi to Tesla. At the Tesla Delivery Center everything was really nice, the staff were accommodating (though a bit clumsy: "I can't get to your car right now, because the colleague is gone with the key").
At the handover we pointed out a few flaws in the paint, and then we drove off. And that was really great... but more about that in the next blog post.
2019-02-26 - Tags: Apple MacMini OSX
originally posted on: https://boesebeck.name
I have been a Mac user for quite some time now and have always been happy this way. Apple managed to deliver a combination of operating system and hardware that is stable, secure and easy to use (even for non-IT guys), but also has a lot of features for power users.
(I already described my IT history elsewhere: https://boesebeck.name/v/2013/4/19/meine_it_geschichte)
My iMac, which I had used for quite some time (since the beginning of 2011), died in a rather spectacular way (for a Mac): it just made a little "zing" and was off. I could not switch it back on again - broken beyond repair... :frown:
So I needed some new hardware. Apple unfortunately missed the opportunity at the last hardware event at the end of 2018 to put newer hardware into the current iMacs. They still ship with a more than 2-year-old CPU. Not really "current" tech, but quite expensive.
The pricing of Apple products is definitely something you can argue about. Hardware prices were increased for almost everything, same as the prices of the new iPhones. This is kind of outrageous...
In this context, the new MacMini is a very compelling alternative. The "mini Mac" always was the entry level mac and was the cheapest one in the line up. Well, you need to have your own keyboard, mouse and screen.
Now the MacMini finally got some love from Apple. The update is quite good: a recent CPU and a lot of useful (and fast) ports: 4x Thunderbolt 3, 2x USB-A 3.0, HDMI. This is the Mac for people who want a fast desktop but do not want to pay €5,000 for an iMac Pro.
I was a bit put off by the MacMini at first, because it does not have a real GPU. Well, there is one from Intel - but you can hardly call it a Graphics Processing Unit.
That has always been the problem with the MacMini: if you want to use it as a server, fine (I have one to back up the photo library). But as a media PC? Or even a gaming machine? No way... as soon as decent graphics is involved, the MacMini fails.
But with Thunderbolt 3 you can now solve this "problem" using an eGPU (external graphics card). How should that work? External is always slower than internal, right?
Well, not always. Thunderbolt 3 delivers up to 40 GBit/s, of which about 32 GBit/s are available for PCIe data - the equivalent of a PCIe 3.0 x4 link. That is less than the x16 slot a desktop GPU normally sits in, but most GPUs rarely saturate x16, so in practice the link is usually not the bottleneck (although there is some overhead in the communication).
And indeed, it is quite ok. I bought the MacMini with an external eGPU and I am astonished how much power this little machine has. Admittedly, all the connectors, cables, dongles etc. do not look as tidy as the good old iMac. And the best thing: if you want to upgrade your eGPU because there is a better one - fine... or upgrade the Mac mini and keep the eGPU. Flexibility increased!
Of course, my 8-year-old iMac cannot keep up with the current MacMini; that would be an unfair comparison. But I have to admit that the 2011 iMac was a lot quicker when it comes to (internal) graphics performance. So for gaming, the Mini on its own is not the right choice.
The built-in disk, of course, is an SSD. Unfortunately it is soldered on and cannot be replaced. But it is blazingly fast and reads/writes at up to 2000 MB/s.
If I look at my GeekBench results for the Mini, the single-core benchmark is similar to the current iMac Pro with a Xeon processor. That is truly impressive. But, of course, in the multi-core benchmark the Mini can't keep up - it simply does not have enough cores to compete with an 8-core machine. I have the "bigger" MacMini with the current-generation i7 CPU.
I plugged in (or rather, onto it) an external Vega 64 eGPU. This way I could compare the graphics performance with other current machines using the Unigine benchmarks. In those benchmarks, my Mini is about as fast as an iMac Pro with the Vega 64. This is astonishing.
Well, how much does all this performance cost? Is it cheaper than a well-specced 27" iMac?
The calculation is relatively simple. To get something comparable in an iMac you need to take the i7 processor - although that one is about 2 generations behind. As SSD storage, 128 GB is probably not enough; 512 GB sounds more reasonable. Anything else can be attached via Thunderbolt 3. A Samsung X5 SSD connected via Thunderbolt 3 is even faster than the internal SSD - so no drawback here.
You should upgrade the memory yourself, as Apple is very expensive here. Done yourself, an upgrade to 32 GB costs about €300 - Apple charges €700!
But the RAM is not important for the comparison, as I would do exactly the same with the iMac. So let's put that together. Right now an eGPU case is about €400, a Vega 64 also about €400, the MacMini is about €1,489, plus €250 for a screen (LG 4k, works great) and an additional €100 for mouse and keyboard. All in all you end up at about €2,639, +/- €200!
Just for comparison: the iMac would cost about €2,839 - and in this configuration it would be slower than the Mini. With a Vega 64 and a comparable CPU, the Mini is more comparable to the base model of the iMac Pro, which costs €5,499 (but still has a slower GPU!).
The new MacMini is definitely worth a thought, considering the costs in comparison to other Macs - especially since you do not have to buy everything at once (like: buy the MacMini, 3 months later the RAM upgrade, 3 months later the eGPU case and later the graphics card). The biggest disadvantage of the Mini: you now have more cables on your desk than with the iMac...
I have had the Mini running for some months now and I love it! If you need a desktop, the MacMini is worth a try - even compared with a MacBook!
category: global
2018-07-17 - Tags:
originally posted on: https://boesebeck.name
Note: this text was provided by homepage-erstellen.de
Once you have cleared the first hurdles of creating a homepage, you have to take care of a suitable layout. A clear and appealing layout ensures that relevant content can be found more easily and that visitors are more likely to return.
When it comes to the layout of a homepage, you first have to consider what purpose the homepage is supposed to serve. Is a product being presented? Do you want to inform about a company's services? Or is the homepage used to raise awareness of a personal concern? It is important that all relevant information can be found at any time. A good layout consists of headings, images, footers and columns. This way, information is sensibly pre-filtered and can be grasped at a glance. This increases the ease of use for visitors and the likelihood that they will return to the site at a later time. Colors and shapes are perceived first in a layout. A colorful layout can be suitable for a portfolio that is supposed to express creativity, for example, but hardly fits certain companies or service providers. For those, it is important that the information on every product can be found immediately.
According to www.homepage-erstellen.de, a sidebar can prove very useful for visitors of the site. However, it should not contain the most important content, but mainly summarize supplementary information. The alignment does not play a big role; the sidebar can be placed on the right as well as on the left side. A logo should be placed in the upper left corner. On e-commerce sites, the shopping cart is usually placed in the right corner. The search field is often located right next to or in close proximity to the shopping cart.
category: Computer --> programming --> MongoDB --> morphium
2018-05-20 - Tags: java mongodb morphium cache
originally posted on: https://caluga.de
Since its first version, Morphium has provided an internal cache for all entities marked with the annotation Cache. This cache was configured mainly via those annotations.
This cache has proven its usefulness in countless projects, and the synchronization of caches in clustered environments via Morphium messaging works flawlessly.
But there are new and more sophisticated cache implementations out there. It would not be clever to build all those features into morphium as well; better to leverage those projects. So we decided to include JCache support (JSR107) in morphium.
Of course, we had to adapt some things here and there; especially the MorphiumCache interface needed to be overhauled.
Morphium always offered the option to use your own MorphiumCache implementation, but this was not always easy to achieve in your own projects. Hence we use that mechanism now in order to offer both the old, proven implementation and the new JCache-based one.
As always, morphium can be used out of the box, so we also built a JCache version of our cache into morphium.
With the upcoming V3.2.2BETA1 (via maven central or on github), morphium will use the JCache compatible implementation. If you want to switch back to the old, proven caching implementation, you just need to change the config:
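In code, that could look something like this (a sketch - I am assuming the MorphiumConfig setter matches the property shown below; check the current API):

```java
MorphiumConfig cfg = new MorphiumConfig();
// switch back to the old, proven cache implementation (assumed setter name)
cfg.setCacheClassName("de.caluga.morphium.cache.MorphiumCacheImpl");
Morphium morphium = new Morphium(cfg);
```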
If you create your MorphiumConfig via properties or via JSON, you need to set the class name accordingly:
cacheClassName=de.caluga.morphium.cache.MorphiumCacheImpl
If you leave all those settings at their defaults, the JCache API is used. By default, the cache creates its cache manager using Caching.getCachingProvider().getCacheManager(). This way you get the default of the default.
If you want to configure the cache yourself (ehcache properties, for example), you just need to pass the CacheManager on to the morphium cache:
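Roughly along these lines (a sketch; setCacheManager on the morphium cache is an assumption, consult the current MorphiumCache interface):

```java
import javax.cache.CacheManager;
import javax.cache.Caching;

// configure your favourite JCache implementation (e.g. ehcache) first
CacheManager cacheManager = Caching.getCachingProvider().getCacheManager();
// then hand the pre-configured manager over to morphium (assumed setter)
morphium.getCache().setCacheManager(cacheManager);
```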
Of course, no additional options are set in this example, but I think you can see how that would work.
BTW: the morphium-internal JCache implementation can also be used via the JCache API in your own application if you want to. Just add the system setting -Djavax.cache.spi.CachingProvider=de.caluga.morphium.cache.jcache.CachingProviderImpl, and Caching.getCachingProvider() will return the morphium implementation of the cache.
Attention: all JCache implementations support expiration of the oldest / least recently used entries in the cache. Unfortunately, morphium's policy is a bit more complex (especially regarding the number of entries), so morphium implements its own JCache housekeeping for now.
Additional info: whatever cache implementation you use, you might still use the CacheSynchronizer to synchronize caches. And this synchronization should work via Mongo even if you are not storing any entities and are only using the cache as an application cache!
<dependency>
    <groupId>de.caluga</groupId>
    <artifactId>morphium</artifactId>
    <version>3.2.2BETA1</version>
</dependency>
There are some minor known bugs in the current beta that you might want to know about:
category: Computer --> programming --> MongoDB --> morphium
2018-05-06 - Tags: java programming morphium
originally posted on: https://caluga.de
One of the many advantages of Morphium is the integrated messaging system. It is used, for example, for synchronizing the caches in a clustered environment. It has been part of Morphium for quite some time; it was introduced with one of the first releases.
Messaging uses a sophisticated locking mechanism to ensure that messages meant for a single recipient are only processed there. Unfortunately, this is usually solved by polling, i.e. querying the db every now and then. Since Morphium V3.2.0 we can use the OplogMonitor for messaging. This creates a kind of "push" for new messages: the DB informs the clients about incoming messages.
This reduces load and increases speed. Let's have a look at how that works...
As mentioned above, with V3.2.0 we need to distinguish two cases: are we connected to a replicaset (only then is there an oplog the listener could listen to) or not?
"No replicaset" is also the case if you are connected to a sharded cluster via MongoS. Here, messaging also uses polling to get the data. Of course, this can be configured: how long the system should pause between polls, whether messages should be processed one by one or in a bulk...
It all comes down to the locking. Have a closer look at Messaging.java for the details of the algorithm.
The OplogMonitor has been part of Morphium for quite a while now. It uses a TailableCursor on the oplog to get informed about changes. A tailable cursor stays open even if there are no more matching documents and sends all incoming documents to the client. So the client gets informed about all changes in the system.
With morphium 4.0 we use the change stream instead of the oplog to get informed about new messages. This works just as efficiently, but does not need admin access.
So why not use a TailableCursor directly on the Msg collection then? For several reasons:
Messaging based on the OplogMonitor looks quite similar to the algorithm above, but the push simplifies things a bit. On new messages, this happens:
Usually, when an update to messages comes in, nothing interesting happens. But to be able to reject messages (see below), we just start the locking mechanism to be sure.
Using messaging is quite simple: just create an instance of Messaging and hit start.
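For example (a minimal sketch; the constructor arguments are the Morphium instance, the polling interval in ms and the multithreading flag, as in the examples further down):

```java
// morphium is your connected Morphium instance
Messaging messaging = new Messaging(morphium, 100, false);
messaging.start();
```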
Of course, you could also instantiate it via spring or something similar.
To send a message, just do something like this:
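(a sketch; the last constructor argument is the ttl in milliseconds)

```java
// send a message named "test" with a time-to-live of 5 seconds
messaging.sendMessage(new Msg("test", "a message", "the value", 5000));
```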
This message has a ttl (time to live) of 5 secs; the default ttl is 30 secs. Older messages are automatically deleted by mongo.
Messages are broadcast messages by default, meaning all listeners may process them. If you set a message to be exclusive, only one of the listeners gets the permission to process it (see the locking above). For example:
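(a sketch; I am assuming the setter is called setExclusive - check the Msg API)

```java
Msg msg = new Msg("test", "an exclusive message", "the value");
msg.setExclusive(true); // only one listener will acquire the lock and process it
messaging.sendMessage(msg);
```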
this message will only be processed by one recipient!
And the sender does not read his own messages!
Of course, you can send a message directly to a specific recipient. This happens automatically when sending answers, for example. To send a message to a specific recipient you need to know their UUID. You can get that from messages being sent (the sender field, for example), or you implement some kind of discovery...
In the integration tests of Morphium, both methods are used. The difference is quite simple: storeMessage stores the message directly to mongodb, whereas queueMessage works asynchronously - which might be the better choice when it comes to performance.
To receive messages, just register a message listener with the messaging system:
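(a sketch, mirroring the listener shape used elsewhere in the morphium docs)

```java
messaging.addMessageListener((msging, msg) -> {
    // handle the incoming message
    System.out.println("got message: " + msg.getMsg());
    return null; // returning a Msg here would send it back as an answer
});
```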
Here, messaging is the messaging system and message is the message that was sent. This listener returns null, but it could also return a Msg that would be sent back to the sender as an answer.
Using the messaging object, the listener can also publish messages of its own that are not answers.
In addition, the listener may "reject" a message by throwing a MessageRejectedException - the message is then unlocked so that all clients may process it again (if it was not sent directly to a single recipient).
Within Morphium, the CacheSynchronizer uses messaging. It takes a messaging system in its constructor.
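(a sketch; the argument order is an assumption, check the current constructor)

```java
// wire cache synchronization up to the messaging system
CacheSynchronizer sync = new CacheSynchronizer(messaging, morphium);
```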
Its implementation is not that complicated. The CacheSynchronizer registers itself as a MorphiumStorageListener, so that it gets informed about all write accesses (only then do caches need to be synchronized). On write access it checks whether a cached entity is affected, and if so, a ClearCache message is sent via messaging. This message also contains the strategy to use (like: clear the whole cache, update the element, and so on). Of course, incoming messages also have to be processed by the CacheSynchronizer. But that is quite simple: if a message comes in, the corresponding cache mentioned in the message is erased according to the strategy.
You can also send those clear messages manually by accessing the CacheSynchronizer directly. And it should be mentioned that you can be informed about all cache sync activities via a specific listener interface.
The messaging feature of morphium is not well known yet. But it can serve as a simple replacement for full-blown messaging systems, and with the new OplogMonitor feature it is better than it ever was.
category: Computer --> programming --> MongoDB --> morphium
2018-05-02 - Tags: morphium java mongodb mongo POJO
originally posted on: https://caluga.de
Hi,
a new pre-release of morphium is available now: V3.2.0Beta2. It includes a lot of minor bugfixes and one big feature: messaging now uses the OplogMonitor when connected to a replicaset. This means no more polling - the system gets informed via a kind of push!
This is also used for cache synchronization.
Release can be downloaded here: https://github.com/sboesebeck/morphium
The beta is also available on maven central.
This is still Beta, but will be released soon - so stay tuned
category: Computer --> programming --> MongoDB --> morphium
2017-11-21 - Tags: java mongo
originally posted on: https://caluga.de
We just released V3.1.7 of morphium - the caching mongodb POJO layer.
Details can be found at the project page on github. You can easily add it to your projects using maven:
<dependency>
<groupId>de.caluga</groupId>
<artifactId>morphium</artifactId>
<version>3.1.7</version>
</dependency>
category: Computer --> programming --> MongoDB --> morphium
2017-09-29 - Tags: morphium
originally posted on: https://caluga.de
This release is about tidying up things a bit and includes some minor fixes and tweaks.
Available via Maven Central
category: global
2017-08-14 - Tags: scrum
... that is not the Question!
This is exactly the problem: most do not understand the agile methodology described in the Agile Manifesto. So let's have a closer look at that...
Disclaimer: I am a software dev, team lead and head of IT, not a scrum evangelist or expert. All I write here is based on experience.
There was the maybe good, but definitely old waterfall model. Most people should know it by now. In short, you cut your project in 2 pieces: first a conceptional phase, then implementing what's in the concept. That was copied from other areas like constructing a house: there you have a plan first, then you build it.
That is totally legit and has its right to exist in engineering as well: you write a fine-grained concept and test everything hundreds of times in theory before the first line of code is written.
If you follow that principle, you end up with software development processes like the ones used by organizations like NASA. And there it needs to be like that, as you cannot just fly up to Mars to replace something on the rover that was not considered during the conceptional phase.
But nowadays everybody does these agile methods, which are totally the shit... you can quadruple your productivity with them, without doing anything, just like that *snap*...
"That is what we're going to do now!"
With that mindset some start introducing agile development in the IT department. Or even worse: in the whole company.
The enthusiasm for Scrum is fine, but that alone is not enough. It's not enough to have a scrum coach in the company for a couple of weeks and to read a book. You might think you know agile management processes then - but you don't!
That is doomed to fail; I have seen it several times now.
Scrum is a set of methods and tools that helped some teams increase productivity and be successful.
But you need to take a closer look at those stories. When you look at where these teams started from, it is not hard to increase productivity a lot.
A lot of departments and areas already work in an agile fashion nowadays, often without actually knowing it. For example, support teams often use tools from "Kanban" to give support ticket processing a bit more structure and well-defined processes, in an area where you do not know what will happen in the next 15 minutes. Very often those tools or processes are not called Kanban or alike, although the whole department needs to be agile.
So, if you want to improve something, have a close look at the real problem.
And that means the team does not have to implement everything ever written in a scrum book or mentioned in the scrum guides. Those are good examples, scaffolds so to say. You need to decide what is best for your team, what will work and what probably won't. This is strongly related to the company culture!
Some of the Scrum nazis (that's what I call people who do scrum literally by the book, without being flexible at any point) will scream out loud now. But from my experience, this is true. It is actually even endorsed by the "inventors" of scrum. I did a PO certification at the end of 2016 with Jeff Sutherland, the only inventor of Scrum (his own words). In the company I worked for at that time, there was a Scrum consultant, also trained by Jeff, to help us introduce scrum. And even those two differ in tools and statements: the consultant used tools never mentioned by Jeff, and vice versa. Some of the statements regarding our team even contradicted the statements Jeff made in the training.
And that is the beauty of it: the consultant, who takes a close look at the circumstances, can make better decisions than any "scientist". The trainer can only give general advice.
But that shows that Scrum is not written in stone. Adapt it to your own needs, your team, your culture. You should only use those tools and concepts of the methodology that you feel good with. And the team also needs to feel fine with that.
There are a lot of things influencing what tool and methodology would work with your team. One of the most important ones is: Trust! Trust from the management in the team and vice versa!
If the team does not trust the management, it will not work (well, nothing will really, but scrum especially not).
Trust is very important. Unlike the old methodologies, scrum tries to empower the team, to reverse the direction of trust. But empowerment also brings responsibility and requires trust.
In waterfall, the project manager would tell you when what will be implemented. In scrum, the team tells what will be implemented, and the order of those things is determined by the prioritization the PO created. But the team will also be held accountable for what it promised. That's the beauty of it...
But if the team does not trust that the management will do the right things for the project and the team and will not interfere with ceremonies and the like, this will definitely cause conflicts. And after a while, you end up with "pseudo" waterfall.
The team seems to be in charge of what is done when, but if that differs from the opinion of the management, you start endless discussions until one side - engineering or management - gives in.
Example:
At the refinement the CEO takes part (which is wrong on its own, as this is one of the PO's meetings) while the team is working on a user story. Let's say there is an architectural decision to be made as part of the story. There are 3 options: A, B and C. The team is going for A.
The CEO answers this with "hmm... yes, good Idea, but we should think that through"
The team thinks B is a valid solution, but never C! But C is the favorite of the CEO.
So, the CEO keeps saying things like "yes, good Idea - but we should go on a bit"
Eventually somebody will say something like "but then only C is left!". The CEO in that case will jump on it "cool, C, that is what we do"
Later, the devs turn out to be right and things explode. The CEO comes in and asks what happened. The devs state that it is due to option C instead of A... and the answer of the CEO is "But you wanted C, so fix it".
You think that is far-fetched? Really?
No - something similar happened to me and my team; I was there! Of course this is a bit overstated, but it was similar to what I wrote here.
So, where does something like that come from? Lack of trust, not being willing to let go and pass on responsibility! And a weak scrum master role - not the person, the role: it was not lived up to what it should be. The scrum master was not allowed to do the things he needed to do!
In that case, it was also combined with a very strange error culture. Although the management stated they were not interested in errors, errors were discussed over and over again - especially when decisions had to be made. And in exactly those moments, at least once, the management stated that "we have a great error culture... BUT..."
So the team ends up not wanting to make any decisions anymore. If it is my fault anyway, and I cannot decide anything either... then, as long as I am there, I'll just sit it out.
And this kills every agile process. From the outside you "look" agile, but internally there is some mixture of Waterfall and... well... what? Monarchy?
A lot of things you do in Scrum (or another agile methodology) aim to optimize transparency in all areas. If done correctly, scrum will help identify problems and show them in all their beauty to everyone who dares to look. It will also help identify ways of improvement and show whether an improvement worked - or not.
In the example above, those problems were discussed very often in the retro. But the scrum master could not change the way the management worked. So nothing changed... and at some point, people stopped complaining about it.
Of course, scrum or agile methodologies do not only help in those complex conflict situations. The normal day-to-day work also becomes more effective if everybody knows everything they need to know to do the job best. If you know where the journey is headed, you can do a better job.
But that is exactly the problem: a lot of managers do not feel comfortable "letting go", not being at the helm anymore, letting the team do the things they were hired for. This causes a lot of conflicts, and in the end there is mutual distrust between team and management.
Of course, the transparency scrum offers is not everybody's cup of tea. Even in engineering, not everybody wants this kind of responsibility. For those, the good ol' waterfall method is a better fit. But then you only have "programmers", not "engineers". Again a bit exaggerated, but I think you know where I am going: some people prefer to implement a fine-grained concept rather than work on something that is not as clearly defined.
Especially in software engineering there are some "leads" who seem to work agile with their team, but do not act agile. Those are usually quite experienced people, good engineers. And they end up doing everything themselves. These "heroes" are a big problem in agile teams: they can break everything. And if they are (or act as) team lead, you end up with a team of "a lot of drudges and one head". This is frustrating, as the team as a whole will not evolve, will not develop. And the team will only be as fast as that one guy - more or less. Scalability? No way!
For this "hero" the situation is usually as unpleasant as for the team: as he is doing everything on his own, or at least wants to know everything in detail, he usually ends up with a lot of overtime while the others just sit there nose-picking...
Scrum should surface something like that, but often these leads also define the scrum they want to do. And those methods and tools that would reveal the problem are - for whatever reason - not in place or simply not used...
Agile methods and tools will also help in this case, but everything needs to fit.
Agile development works great in engineering teams, as this is the "natural" way devs would organize their work in their own (opensource) projects: you iterate your way towards the optimum. But how do you talk to the other teams?
Exactly there lies the problem. I have seen this interface between the departments work extraordinarily well, although the whole company was not agile and only the engineering used scrum. I remember working for a consulting company where it worked that way: the dev teams were working agile, the managers and the project managers were not.
This worked astonishingly well, although there was a break in methodology.
Of course, that is not always the case. If the management wants you to report and plan like waterfall, you will have a tough time working agile.
That happened in another company. They had heard of "this scrum thing" and for test purposes they wanted to run a project in an agile fashion. In the end, this was an utter failure! The communication did not work, the expectations were totally different, and there were a lot of "lessons learned" meetings afterwards to avoid a lawsuit.
If the management tries to be hip and wants to do scrum without knowing what that means - this is the worst thing, that could happen.
In those cases (I have seen that twice in my career), the management tried to bend the scrum ceremonies to their liking, for example turning a scrum of scrums into a reporting meeting.
Scrum in management can only work if the management has understood scrum and has a lot of experience with it. And then they would probably not use Scrum for the management, but maybe some other agile methodology.
If that happens, you need a very strong, unwavering scrum master. And patience is also helpful...
Otherwise this ends in conflicts: the CEO, who understood scrum 50% and gained about 3 months of experience (experience as in: he saw somebody do it), and the scrum master, who just sees all his ceremonies fail because they are misused by the CEO. If the scrum master does not have the standing against the CEO, things will not work.
That is a really tough call, and I do not know a single company that is agile in all aspects. Not to mention using scrum in all departments (as if that would make any sense).
Agile methodology is totally awesome, especially in software engineering or whenever you want to "build" something where you do not know exactly what it will turn out to be - you iteratively improve your solution.
But if the management takes that approach, i.e. does not know what they want to achieve and iteratively tries to get things done... that sounds a bit scary, doesn't it?
So, if you are in management and you have agile teams, why not take a look at Agile Management Methods? This is not Scrum, but still agile - and maybe the right tool for the job...
Just remember, just because you have a hammer, not every problem is a nail!
Links:
http://billschofield.net/The-Insufficiency-of-Scrum/
category: Computer
2017-05-16 - Tags: java jblog security
originally posted on: https://boesebeck.name
I did complain about wordpress several times (for example here). I took that as an opportunity to brush up my software development skills and spend a weekend or two building a new blogging software. Well, the result is this wonderful (well... I hope so) page here.
To stop all PHP fanbois from whining: I do not like PHP very much, because I do not know it very well. Hence, wordpress is also kind of a mystery to me. Configuring it works with luck, let alone getting PHP to do what you want in a more secure way.
So, my blog was hacked several times during the last year and this is pissing me off! I wanted to use a java based solution, but it seems there is no simple, easy-to-use one out there.
Exactly - that was my thought, too. Could not be so complicated, could it? So I wanted to create a blogging software that fits my needs. I wrote it (called jblog - not really creative) myself, and it is not as complex as wordpress. So we should be ok, I guess. But I know for sure that the standard wordpress exploits won't work anymore! jblog only supports 2 languages, German and English (I do not speak more, so I don't need more for my blogs). I am quite ok with what I accomplished here. Although it took longer than one weekend, it was finished quite fast. I like that.
But please: if some links do not work anymore, or some images look strange or are missing - I will fix that eventually.
the private main blog. Will cover topics like hobby, drones, games, gadgets etc.
There I will put all my opensource stuff, like morphium, and all the other programming tips and tricks I have written over time. Hmm... seems like "java blog" is not the right term...
This should be a business site anyway. So here I will put topics about my professional career, Scrum, processes etc.
Well, this is going to be tough. I cannot produce content for 3 full blogs; even filling one is quite hard. But I will try. And we will see how that works.
as mentioned above - not here, but at caluga.de
category: Computer
2015-06-12 - Tags: allgemein blog
originally posted on: https://boesebeck.name
That was stressful. On top of the move, my server died and I had to reinstall. Which - thanks to backups - would not have been much of an effort, had I not forgotten to make a backup of the database... Hence the new start of the old blog now ;-)
category: Computer --> programming --> MongoDB --> morphium
2014-09-05 - Tags: morphium java mongo
originally posted on: https://caluga.de
want help translating / documenting / coding? Contact us on github or via slack
Morphium started as a feature rich access layer and POJO mapper for MongoDB in java. It was built with speed and flexibility in mind. So it supported cluster aware caching out of the box, lazy loading references and much more. The POJO Mapping is the core of Morphium, all other features were built around that. It makes accessing MongoDB easy, supports all great features of MongoDB and adds some more.
But with time, the MongoDB based messaging became one of the most popular features in Morphium. It is fast, reliable, customisable and stable.
This document is the documentation for Morphium in the current (4.2) version. It is best if you have a basic understanding of MongoDB and a good knowledge of Morphium. If you want to know more about the MongoDB features that Morphium implements, have a look at the official MongoDB pages and the documentation there.
This documentation covers all features Morphium has to offer.
Later in this document there are chapters about the POJO mapping, querying data and using the aggregation framework. Also a chapter about the InMemory driver, which is quite useful for testing. But let's start with the messaging subsystem first.
Morphium itself is simple to use, easy to customise to your needs and was built for high performance and scalability. The messaging system is no different. It relies on the watch functionality that MongoDB offers since V3.6 (you can also use messaging with older versions of MongoDB, but it will result in polling for new messages). With that feature, messages are pushed to all listeners. This makes it a very efficient messaging system based on MongoDB.
There are a ton of messaging solutions out there. All of them have their advantages and offer lots of features. But only a few of them offer what Morphium has:
There are people out there using Morphium and its messaging for production grade development. For example Genios.de uses Morphium messaging to power a microservice architecture with an enterprise message bus.
Morphium m=new Morphium();
Messaging messaging=new Messaging(m);
messaging.addMessageListener((messaging, msg) -> {
log.info("Got message!");
return null; //not sending an answer
});
This is a simple example of how to implement a message consumer. This consumer listens to all incoming messages, regardless of name.
Messages have some fields that you might want to use for your purposes. But you can create your own message type as well (see below). The Msg class defines these properties:
- name: the name of the message. You can define listeners that only listen to messages of a specific name, using addListenerForMessageNamed. This is similar to a topic in other messaging systems.
- msg: a String message
- value: a String value
- mapValue: for more complex use cases where you need to send more information
- additional: a list value, used for more complex use cases
- technical fields like processed_by, in_answer_to, timestamp etc.
So if you want to send a Message, that is also simple:
messaging.queueMessage(new Msg("name","A message","the value"));
queueMessage runs asynchronously, which means that the message is not stored directly. If you need more speed and a shorter reaction time, you should use sendMessage instead, which stores the message to mongo directly.
Morphium is able to answer any message for you. Your listener implementation only needs to return an instance of the Msg class. This will then be sent back to the sender as an answer.
When sending a message, you also may wait for the incoming answer. The Messaging class offers a method for that purpose:
//new messaging instance with polling frequency of 100ms, not multithreaded
//polling only used in case of non-Replicaset connections and in some
//cases like unpausing to find pending messages
Messaging sender = new Messaging(morphium, 100, false);
sender.start();
gotMessage1 = false;
gotMessage2 = false;
gotMessage3 = false;
gotMessage4 = false;
Messaging m1 = new Messaging(morphium, 100, false);
m1.addMessageListener((msg, m) -> {
gotMessage1 = true;
return new Msg(m.getName(), "got message", "value", 5000);
});
m1.start();
Thread.sleep(2500);
Msg answer = sender.sendAndAwaitFirstAnswer(new Msg("test", "Sender", "sent", 15000), 15000);
assert (answer != null);
assert (answer.getName().equals("test"));
assert (answer.getInAnswerTo() != null);
assert (answer.getRecipient() != null);
assert (answer.getMsg().equals("got message"));
m1.terminate();
sender.terminate();
As the whole communication is asynchronous, you have to specify a timeout after which the wait for an answer is aborted with an exception. And there might be more than one answer to the same message, hence you will only get the first one.
In the above example, the timeout for the answer is set to 15s (and so is the TTL of the message).
As mentioned above, you can define your own message class to be sent back and forth. This class just needs to extend the standard Msg class. When adding a listener to messaging, you have the option to use generics to specify the Msg type you want to use.
Every message has a priority field, which is used to give queued messages precedence over others. The priority can be changed after a message is queued, directly in MongoDB (or using Morphium).
But as the messaging is built on pushing of messages, when is the priority field used? Whenever messages pile up and need to be fetched from the queue, for example on startup or when unpausing message processing.
In some cases it might be necessary to pause message processing for a while, for example when a message triggers a long running task. Then it would be good not to process any additional messages (at least of that type). You can call messaging.pauseProcessingOfMessagesNamed to stop processing messages of a certain type.
Attention: if you have long running tasks triggered by messages, you should pause processing in the onMessage method and unpause it when finished.
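Here is a minimal sketch of that pattern. The method runLongTask is hypothetical, and the unpause counterpart is assumed to be called unpauseProcessingOfMessagesNamed; check the Messaging API for the exact name:
messaging.addListenerForMessageNamed("longTask", (msging, msg) -> {
    //stop fetching further "longTask" messages while the task runs
    msging.pauseProcessingOfMessagesNamed("longTask");
    try {
        runLongTask(msg); //hypothetical long running task
    } finally {
        //resume processing afterwards - assumed method name
        msging.unpauseProcessingOfMessagesNamed("longTask");
    }
    return null; //no answer to send back
});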
When instantiating Messaging, you can specify two booleans: multithreaded and processMultiple.
multithreaded: if true, each incoming message is processed in its own thread (see the thread pool settings below).
processMultiple: this setting is only important in case of startup or unpausing. If true, messaging will lock all messages available for this listener and process them one by one (or in parallel if multithreading is enabled).
These settings are influenced by other settings:
- messagingWindowSize in MorphiumConfig, or as constructor parameter / setter in Messaging: this defines how many messages are marked for processing at once. Those might be processed in parallel (depending on whether processMultiple is true and on the executor configuration, i.e. how many threads can run in parallel).
- useChangeStream in Messaging: usually messaging determines by the cluster status whether or not to use the change stream. If in a cluster, use it; if not, use polling. But if you explicitly want to use polling, you can set this value to false. The advantage might be that the messages are processed by priority with every poll, which can be useful depending on your use case. If this is set to false (or you are connected to a single instance), the pause configuration option (aka polling frequency) in Messaging determines how fast your messages can be consumed. Attention: a high polling frequency (a low pause value) will increase the load on MongoDB.
- ThreadPoolMessagingCoreSize in MorphiumConfig: if you define messaging to be multithreaded, it will spawn a new thread with each incoming message. This is the core size of the corresponding thread pool. If your messaging instance is not configured for multithreading, this setting is not used.
- ThreadPoolMessagingMaxSize: max size of the thread pool, similar to above.
- ThreadPoolMessagingKeepAliveTime: time for threads to live, in ms
Some examples of how these settings interact:
- with a windowSize of 100 and a ThreadPoolMessagingMaxSize of 10, there will be 100 messages in the queue marked for being processed by this specific messaging instance, but only 10 will be processed in parallel.
- without multithreading, windowSize determines how many messages are marked for being processed, but they are processed one by one.
- multithreaded set to true and processMultiple set to false results in each message being processed in its own separate thread, but only one at a time. This is very similar to having multithreaded and processMultiple both set to false.
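As a sketch, those options could be wired up like this. The setter names are assumed to mirror the option names above, and the five-argument Messaging constructor is an assumption as well; check MorphiumConfig and Messaging for the exact signatures:
MorphiumConfig cfg = new MorphiumConfig();
cfg.setDatabase("test");
cfg.addHostToSeed("localhost:27017");
//assumed setters, named after the options described above
cfg.setMessagingWindowSize(100);
cfg.setThreadPoolMessagingCoreSize(5);
cfg.setThreadPoolMessagingMaxSize(10);
Morphium m = new Morphium(cfg);
//assumed overload: (morphium, pause, processMultiple, multithreaded, windowSize)
Messaging messaging = new Messaging(m, 100, true, true, 100);
messaging.start();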
When creating a Messaging instance, you can set a collection name to use. This could be compared to having a separate message queue in the system. Messages sent to one queue are not being registered by another.
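For example (assuming a Messaging constructor overload that takes the queue/collection name; the queue names here are made up):
//two logically separate queues in the same database
Messaging orders = new Messaging(m, "order_queue", 100, true);
Messaging billing = new Messaging(m, "billing_queue", 100, true);
orders.start();
billing.start();
//a message queued on "order_queue" is never seen by "billing_queue" listeners
orders.queueMessage(new Msg("newOrder", "a message", "a value"));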
Morphium messaging also implements the standard JMS API to a certain extent and can be used this way. Please keep in mind that JMS does not support most of the features Morphium messaging offers, and that the JMS implementation does not cover 100% of the JMS API yet:
@Test
public void basicSendReceiveTest() throws Exception {
JMSConnectionFactory factory = new JMSConnectionFactory(morphium);
JMSContext ctx1 = factory.createContext();
JMSContext ctx2 = factory.createContext();
JMSProducer pr1 = ctx1.createProducer();
Topic dest = new JMSTopic("test1");
JMSConsumer con = ctx2.createConsumer(dest);
con.setMessageListener(message -> log.info("Got Message!"));
Thread.sleep(1000);
pr1.send(dest, "A test");
ctx1.close();
ctx2.close();
}
@Test
public void synchronousSendReceiveTest() throws Exception {
JMSConnectionFactory factory = new JMSConnectionFactory(morphium);
JMSContext ctx1 = factory.createContext();
JMSContext ctx2 = factory.createContext();
JMSProducer pr1 = ctx1.createProducer();
Topic dest = new JMSTopic("test1");
JMSConsumer con = ctx2.createConsumer(dest);
final Map<String, Object> exchange = new ConcurrentHashMap<>();
Thread senderThread = new Thread(() -> {
JMSTextMessage message = new JMSTextMessage();
try {
message.setText("Test");
} catch (JMSException e) {
e.printStackTrace();
}
pr1.send(dest, message);
log.info("Sent out message");
exchange.put("sent", true);
});
Thread receiverThread = new Thread(() -> {
log.info("Receiving...");
Message msg = con.receive();
log.info("Got incoming message");
exchange.put("received", true);
});
receiverThread.start();
senderThread.start();
Thread.sleep(5000);
assert (exchange.get("sent") != null);
assert (exchange.get("received") != null);
}
Caveats:
The JMS implementation uses the answering mechanism for acknowledging incoming messages. This makes JMS more or less half as fast as the direct usage of Morphium messaging.
Also, the implementation is very basic at the moment; a lot of methods lack an implementation. If you notice some missing functionality, just open an issue on github.
Because the JMS implementation is very basic at the moment, it should not be considered production ready!
Back to plain Morphium messaging: here is a minimal producer, which just queues a message:
Morphium m=new Morphium(config);
// create messaging instance with default settings, meaning
// no multithreading, windowSize of 100, processMultiple false
Messaging producer=new Messaging(m);
producer.queueMessage(new Msg("name","a message","a value"));
The receiver needs to connect to the same mongo and the same database:
Morphium m=new Morphium(config);
Messaging consumer=new Messaging(m);
consumer.start(); //needed for receiving messages
consumer.addMessageListener((messaging, msg) -> {
//Incoming message
System.out.println("Got a message of name "+msg.getName());
return null; //no answer to send back
});
You can also register listeners only for specific messages:
consumer.start(); //needed for receiving messages
consumer.addListenerForMessageNamed("name",(messaging, msg) -> {
//Incoming message, is always named "name"
System.out.println("Got value: "+msg.getValue());
Msg answer=new Msg(msg.getName(),"answer","the answerValue");
return answer; //send this answer back to the sender
});
Attention: the producer will only be able to process incoming messages if start() was called!
The message sent there was a broadcast message: all registered listeners will receive and process it!
In order to send a message directly to a specific messaging instance, you need to get its unique ID. This id is added as sender to any message.
Msg m=new Msg("Name","Message","value");
m.addRecipient(messaging1.getId());
//you could add more recipients if necessary
Background: this is used to send answers back to the sender. If you return a message instance in onMessage, this message will be sent directly back to the sender.
You can add as many recipients as needed; if no recipient is defined, the message is sent to all listeners by default.
Broadcast messages are fine for informing all listeners about something. But for some more complex scenarios, you need a way to queue a message and have only one listener process it, no matter which one (load balancing). Morphium supports this kind of message; it is called "exclusive broadcast". This way, you can easily scale up by just adding listener instances.
Sending an exclusive broadcast message is simple:
Msg m=new Msg("exclusive","The message","and value");
m.setExclusive(true);
messaging.queueMessage(m);
The listener only needs to implement the standard onMessage method to get this message. Due to some sophisticated locking of messages, Morphium makes this message exclusive, which means it is processed only once!
Since Morphium V4.2 it is also possible to send an exclusive message to certain recipients.
The behaviour is the same: the message will only be processed by one of the specified recipients, whereas it would be processed by all recipients if it were not exclusive.
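A short sketch combining both features, using only the calls shown above (messaging1 and messaging2 stand for two running Messaging instances):
Msg m = new Msg("task", "The message", "and value");
m.setExclusive(true);
//only one of the two recipients will process this message
m.addRecipient(messaging1.getId());
m.addRecipient(messaging2.getId());
messaging.queueMessage(m);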
One main purpose of the InMemoryDriver is to be able to do testing without having MongoDB installed. The InMemoryDriver makes it possible to run all MongoDB code in memory, with a couple of exceptions: some features, like the aggregation framework, are not fully implemented in memory. If you want to mock those things in testing, you need to subclass the InMemoryDriver and override the corresponding methods, e.g. override aggregate() and return properly mocked data:
@Test
public void mockAggregation() throws Exception{
MorphiumDriver original=morphium.getDriver();
morphium.setDriver(new InMemoryDriver(){
@Override
public List<Map<String, Object>> aggregate(String db, String collection, List<Map<String, Object>> pipeline, boolean explain, boolean allowDiskUse, Collation collation, ReadPreference readPreference) throws MorphiumDriverException {
return Arrays.asList(Utils.getMap("MockedData",123.0d));
}
});
Aggregator<UncachedObject, Map> agg = morphium.createAggregator(UncachedObject.class, Map.class);
//...
assert(agg.aggregate().get(0).get("MockedData").equals(123.0d)); //checking mocked data
morphium.getDriver().close();
morphium.setDriver(original);
}
To use the InMemoryDriver, you just need to set the driver class properly in your Morphium configuration:
MorphiumConfig cfg = new MorphiumConfig();
cfg.addHostToSeed("inMem");
cfg.setDatabase("test");
cfg.setDriverClass(InMemoryDriver.class.getName());
cfg.setReplicasetMonitoring(false);
morphium = new Morphium(cfg);
Of course, the InMemDriver does not need hosts to connect to, but for compatibility reasons, you need to add at least one host (although it will be ignored).
You can also set the Driver in the settings, e.g. in properties:
morphium.driverClass = "de.caluga.morphium.driver.inmem.InMemoryDriver"
After that initialisation you can use this Morphium instance as always, except that it will "persist" data only in memory.
As in-memory storage is by definition not durable, it might be a good idea to store your data to disk for later use. The InMemoryDriver supports that:
@Test
public void driverDumpTest() throws Exception {
for (int i = 0; i < 100; i++) {
UncachedObject e = new UncachedObject();
e.setCounter(i);
e.setValue("value" + i);
e.setIntData(new int[]{i, i + 1, i + 2});
e.setDval(42.00001);
e.setBinaryData(new byte[]{1, 2, 3, 4, 5});
morphium.store(e);
ComplexObject o = new ComplexObject();
o.setEinText("A text " + i);
o.setEmbed(new EmbeddedObject("emb", "v1", System.currentTimeMillis()));
o.setRef(e);
morphium.store(o);
}
ByteArrayOutputStream bout = new ByteArrayOutputStream();
InMemoryDriver driver = (InMemoryDriver) morphium.getDriver();
driver.dump(morphium, morphium.getDriver().listDatabases().get(0), bout);
log.info("database dump is " + bout.size());
driver.close();
driver.connect();
driver.restore(new ByteArrayInputStream(bout.toByteArray()));
assert (morphium.createQueryFor(UncachedObject.class).countAll() == 100);
assert (morphium.createQueryFor(ComplexObject.class).countAll() == 100);
for (ComplexObject co : morphium.createQueryFor(ComplexObject.class).asList()) {
assert (co.getEinText() != null);
assert (co.getRef() != null);
}
}
In this example, data is stored to a binary stream, which could also be stored to disk somewhere.
But you can also create a dump in JSON format, which makes it easier to edit and maybe to create from scratch:
@Test
public void jsonDumpTest() throws Exception {
MorphiumTypeMapper<ObjectId> mapper = new MorphiumTypeMapper<ObjectId>() {
@Override
public Object marshall(ObjectId o) {
Map<String, String> m = new HashMap<>();
m.put("value", o.toHexString());
m.put("class_name", o.getClass().getName());
return m;
}
@Override
public ObjectId unmarshall(Object d) {
return new ObjectId(((Map) d).get("value").toString());
}
};
morphium.getMapper().registerCustomMapperFor(ObjectId.class, mapper);
for (int i = 0; i < 10; i++) {
UncachedObject e = new UncachedObject();
e.setCounter(i);
e.setValue("value" + i);
morphium.store(e);
}
ExportContainer cnt = new ExportContainer();
cnt.created = System.currentTimeMillis();
cnt.data = ((InMemoryDriver) morphium.getDriver()).getDatabase(morphium.getDriver().listDatabases().get(0));
Map<String, Object> s = morphium.getMapper().serialize(cnt);
System.out.println(Utils.toJsonString(s));
morphium.dropCollection(UncachedObject.class);
ExportContainer ex = morphium.getMapper().deserialize(ExportContainer.class, Utils.toJsonString(s));
assert (ex != null);
((InMemoryDriver) morphium.getDriver()).setDatabase(morphium.getDriver().listDatabases().get(0), ex.data);
List<UncachedObject> result = morphium.createQueryFor(UncachedObject.class).asList();
assert (result.size() == 10);
assert (result.get(1).getCounter() == 1);
}
@Entity
public static class ExportContainer {
@Id
public Long created;
public Map<String, List<Map<String, Object>>> data;
}
The JSON output of this little dump looks like this:
{
"_id" : 1599853076411,
"data" : {
"uncached_object_0" : [
{
"_id" : {
"class_name" : "org.bson.types.ObjectId",
"value" : "5f5bd214f8fd82e792ef3b51"
},
"counter" : 0,
"dval" : 0,
"value" : "value0"
},
{
"_id" : {
"class_name" : "org.bson.types.ObjectId",
"value" : "5f5bd214f8fd82e792ef3b53"
},
"counter" : 1,
"dval" : 0,
"value" : "value1"
},
{
"_id" : {
"class_name" : "org.bson.types.ObjectId",
"value" : "5f5bd214f8fd82e792ef3b55"
},
"counter" : 2,
"dval" : 0,
"value" : "value2"
},
{
"_id" : {
"class_name" : "org.bson.types.ObjectId",
"value" : "5f5bd214f8fd82e792ef3b57"
},
"counter" : 3,
"dval" : 0,
"value" : "value3"
},
{
"_id" : {
"class_name" : "org.bson.types.ObjectId",
"value" : "5f5bd214f8fd82e792ef3b59"
},
"counter" : 4,
"dval" : 0,
"value" : "value4"
},
{
"_id" : {
"class_name" : "org.bson.types.ObjectId",
"value" : "5f5bd214f8fd82e792ef3b5b"
},
"counter" : 5,
"dval" : 0,
"value" : "value5"
},
{
"_id" : {
"class_name" : "org.bson.types.ObjectId",
"value" : "5f5bd214f8fd82e792ef3b5d"
},
"counter" : 6,
"dval" : 0,
"value" : "value6"
},
{
"_id" : {
"class_name" : "org.bson.types.ObjectId",
"value" : "5f5bd214f8fd82e792ef3b5f"
},
"counter" : 7,
"dval" : 0,
"value" : "value7"
},
{
"_id" : {
"class_name" : "org.bson.types.ObjectId",
"value" : "5f5bd214f8fd82e792ef3b61"
},
"counter" : 8,
"dval" : 0,
"value" : "value8"
},
{
"_id" : {
"class_name" : "org.bson.types.ObjectId",
"value" : "5f5bd214f8fd82e792ef3b63"
},
"counter" : 9,
"dval" : 0,
"value" : "value9"
}
]
}
}
In the early days of MongoDB there were not many POJO mapping libraries available. One of them was called morphia. Unfortunately we had a lot of problems adapting it to our needs.
Hence we built Morphium, and we named it similar to morphia to show where the initial idea came from.
Morphium is built with flexibility, thread safety, performance and cluster awareness in mind.
For example, Morphium brings its own ID type MorphiumId, which is a drop-in replacement for ObjectId.
Morphium is built to be very flexible and can be used in almost any environment. So the architecture needs to be flexible and sustainable at the same time. Hence it's possible to use your own implementation for the cache if you want to.
There are four major components of Morphium; the two you will interact with most are the queries and the writers:
- the Query object: created via morphium.createQueryFor(Class<T> cls). With a Query, you can easily read data from the database or have some things changed (update) and alike.
- the writers: the default writer (MorphiumWriter) writes directly to the database, waiting for the response. The BufferedWriter does not write directly; all writes are stored in a buffer, which is then processed as a bulk. The last type of writer is the asynchronous writer (AsyncWriter), which is similar to the buffered one, but starts writing immediately, only asynchronously. Morphium decides which writer to use depending on the configuration and the annotations of the given entities. But you can always make calls asynchronous just by adding an AsyncCallback implementation to your request.
Simple rule when using Morphium: if you want to read, use the Query object; if you want to write, use the Morphium object.
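A tiny sketch of that rule; MyEntity and its fields are made up, the calls themselves all appear in this text:
//read: build a Query and fetch
Query<MyEntity> q = morphium.createQueryFor(MyEntity.class);
MyEntity e = q.f("value").eq("x").get();
//write: go through the Morphium object
e.setValue("y");
morphium.store(e);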
There are some additional features built upon this architecture, like support for storing BigInteger instances in MongoDB. Morphium is also highly customisable: for example, you could exchange the default BufferedWriter for a custom built one (in MorphiumConfig). Also you could replace the object mapper with your own implementation by implementing the ObjectMapper interface and telling Morphium which class to use instead. Most of these components can be changed in Morphium / MorphiumConfig (see the configuration section below).
Morphium is capable of mapping standard Java objects (POJOs - plain old java objects) to MongoDB documents and back. This makes it possible to seamlessly integrate MongoDB into your application.
When working with databases - not only NoSQL ones - you need to consider caching. Morphium adds transparent declarative caching by entity to your application, if needed. Just define your caching needs in the @Cache annotation.
The cache uses any JCache compatible cache implementation (like EHCache), but provides its own implementation if nothing else is specified.
There are two kinds of caches: read cache and write cache.
Write cache:
The write cache is just a buffer where all things to write are stored and eventually written to the database. This is done by adding the annotation @WriteBuffer to the class:
@Entity
@WriteBuffer(size = 150, strategy = WriteBuffer.STRATEGY.DEL_OLD)
public static class BufferedBySizeDelOldObject extends UncachedObject {
}
In this case, the buffer has a maximum of 150 entries, and once the buffer has reached that maximum, the oldest entries will just be deleted from the buffer and hence NOT be written!
Possible strategies are:
- WriteBuffer.STRATEGY.DEL_OLD: delete the oldest entries from the buffer - use with caution
- WriteBuffer.STRATEGY.IGNORE_NEW: do not write the new entry, just discard it - use with caution
- WriteBuffer.STRATEGY.JUST_WARN: just log a warning message, but store the data anyway
- WriteBuffer.STRATEGY.WRITE_NEW: write the new entry synchronously and wait for it to be finished
- WriteBuffer.STRATEGY.WRITE_OLD: write some old data NOW, wait for it to be finished, then queue the new entries
That's it - the rest is 100% transparent: just call morphium.store(entity); everything else is done automatically.
Internally this uses the BufferedWriter implementation, which can be changed if needed (see configuration options below). Also, some config settings exist for switching off buffered writing altogether, which comes in handy when testing. Have a closer look at the configuration options in MorphiumConfig which refer to writeBuffer or BufferedWriter.
Read Cache
Read caches are defined on type level with the annotation @Cache. There you can specify, how your cache should operate:
@Cache(clearOnWrite = true, maxEntries = 20000, strategy = Cache.ClearStrategy.LRU, syncCache = Cache.SyncCacheStrategy.CLEAR_TYPE_CACHE, timeout = 5000)
@Entity
public class MyCachedEntity {
.....
}
Here a cache is defined which holds a maximum of 20000 entries. Those entries have a lifetime of 5 seconds (timeout=5000), which means no element will stay longer than 5s in the cache. The strategy defines what happens when you read an additional object and the cache is full:
- Cache.ClearStrategy.LRU: remove the least recently used elements from the cache
- Cache.ClearStrategy.FIFO: first in, first out - depending on the time added to the cache
- Cache.ClearStrategy.RANDOM: just remove some random entries
With clearOnWrite=true set, the local cache will be erased any time you write an entity of this type to the database. This prevents dirty reads. If set to false, you might end up with stale data (for as long as the timeout value), but you produce less stress on mongo and will probably be a bit faster.
As mentioned above, caching is of utter importance in production grade applications. Usually, caching in a clustered environment is kind of a pain, as you need to consider dirty reads and such. But Morphium caching also works fine in a clustered environment. Just start (instantiate) a CacheSynchronizer, and you're good to go!
There are two implementations of the cache synchronizer:
- WatchingCacheSynchronizer: uses MongoDB's watch feature to get informed about changes in collections via push.
- MessagingCacheSynchronizer: uses messaging to inform cluster members about changes. This one has the advantage that you can also send messages manually or when other events occur.
Internals / Implementation details
Morphium uses the cache based on the given search query, sort options and collection overrides. This means that there might be duplicate cache entries. In order to minimize memory usage, Morphium also uses an ID cache: all results are added to this id cache, and those ids are added as result to the query cache.
The caches are organized per type. This means, if your entity is not marked with @Cache, queries to this type won't be cached, even if you override the collection name.
How to synchronize caches on different nodes is a common problem, especially in clustered environments. Morphium offers a simple solution: on every write operation, a message is stored in the message queue (see the messaging system above) and all nodes clear the cache for the corresponding type (which results in a re-read of objects from mongo - keep that in mind if you plan to have a hundred hosts on your network). This is easy to use and does not cause a lot of overhead. Unfortunately it cannot be more efficient, since the cache in Morphium is organized by searches.
The Morphium cache synchronizer does not issue messages for uncached entities or for entities where clearOnWrite is set to false.
Here is an example on how to use this:
Messaging m=new Messaging(morphium,10000,true);
MessagingCacheSynchronizer cs=new MessagingCacheSynchronizer(m,morphium);
Actually this is all there is to do, as the CacheSynchronizer registers itself to both Morphium and the messaging system.
Change since 1.4.0
Now the caching is specified per entity in the @Cache annotation, using an enum called SyncCacheStrategy. Possible values are: NONE (default), CLEAR_TYPE_CACHE (clear the cache of all queries on change), REMOVE_ENTRY_FROM_TYPE_CACHE (removes all cached query results containing this element) and UPDATE_ENTRY (updates the entry itself).
enum SyncCacheStrategy {NONE, CLEAR_TYPE_CACHE, REMOVE_ENTRY_FROM_TYPE_CACHE, UPDATE_ENTRY}
UPDATE_ENTRY only works when storing complete records, not on drop, remove or partial updates (like inc, set, push...). For example, if UPDATE_ENTRY is set and you drop the collection, the type cache will be cleared.
Attention: UPDATE_ENTRY will result in dirty reads, as the item itself is updated, but not the corresponding searches!
Meaning: assume you have a Query result cached, where you have all Users listed which have a certain role:
Query<User> q=morphium.createQueryFor(User.class);
q=q.f("role").eq("Admin");
List<User> lst=q.asList();
Let's further assume you got 3 users as a result. Now imagine one node in your cluster changes the role of one of those users to something other than "Admin". With UPDATE_ENTRY, the cached users may change while you use them - be careful with that! More importantly: your cache holds a copy of that list of users for a certain amount of time, and during that time you will get dirty reads. Meaning: you may get objects that no longer match your query, or miss objects that actually would (not so bad, actually).
Better use REMOVE_ENTRY_FROM_TYPE_CACHE in that case, as it keeps everything in the cache except the search results containing the updated element. It might also cause a dirty read (as newly added elements might not show up in your cached results), but it keeps the findings more or less correct.
As all these synchronizations are done by sending messages via Morphium's own messaging system (which means storing messages in the DB), you should consider just disabling the cache in case of heavy updates, as a read from mongo might actually be a lot faster than the synchronization of caches.
Keep that in mind!
Change since 1.3.07
Since 1.3.07 you need to add autoSync=true to your cache annotation in order to have things synced. It turned out that automatic syncing is not always the best solution. So you can still sync your caches manually.
Manually Syncing the Caches
The sync in Morphium can be controlled totally manually (since 1.3.07), just send your own Clear-Cache Message using the corresponding method in CacheSynchronizer.
cs.sendClearMessage(CachedObject.class,"Manual delete");
When it comes to dirty reads and such, you might want to use the auto-versioning feature of Morphium. This gives every entity a version number. If you write to MongoDB and the version number differs, you get an exception, meaning the database was modified before you tried to persist your data. This so-called optimistic locking helps in most cases to avoid accidentally overwriting data.
To use auto-versioning, just set the corresponding flag in the @Entity annotation to true and define a Long field in your class that should hold the version number, using the @Version annotation.
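A minimal sketch of a versioned entity; the attribute name autoVersioning in @Entity is an assumption, check the annotation for the exact name:
@Entity(autoVersioning = true) //attribute name assumed
public class VersionedEntity {
    @Id
    private MorphiumId id;
    @Version
    private Long version; //managed by Morphium - never change manually
    private String value;
    //getters and setters omitted for brevity
}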
Attention: do not change the version value manually, this will cause problems writing and will most probably cause loss of data!
Usually Morphium knows which collection holds which kind of data, so when de-serializing it is easy to know what class to instantiate.
But when it comes to polymorphism and containers (like lists and maps), things get complicated. In these cases Morphium adds the class name as a property to the document. Up until version 4.0.0 this caused some problems when refactoring your entities: if you changed the class name or the package name of that class, de-serializing was impossible (the stored class name was obviously wrong).
Now you can just set the typeId in @Entity to be able to refactor more easily. If you already have data and you want to refactor your entity names, just add the original class name as type id!
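For example (the class and package names are made up):
//this entity used to be com.example.old.MyEntity before refactoring -
//keeping the old class name as typeId lets existing documents deserialize
@Entity(typeId = "com.example.old.MyEntity")
public class MyRenamedEntity {
    @Id
    private MorphiumId id;
    private String value;
}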
One of the very convenient features of SQL databases is the support for sequences, which is very useful when you need unique IDs. Morphium implements a feature very similar to SQL sequences, hence it is also called SequenceGenerator. A sequence is a simple implementation in Morphium that uses MongoDB to generate unique numbers. Example:
SequenceGenerator sg = new SequenceGenerator(morphium, "tstseq", 1, 1);
long v = sg.getNextValue();
assert (v == 1) : "Value wrong: " + v;
v = sg.getNextValue();
assert (v == 2);
As those generators use MongoDB for synchronization, they are cluster-safe and can be used by all clients of the same MongoDB simultaneously. No number will be delivered twice! This test uses several threads to access the same SequenceGenerator:
final SequenceGenerator sg1 = new SequenceGenerator(morphium, "tstseq", 1, 0);
Vector<Thread> thr = new Vector<>();
final Vector<Long> data = new Vector<>();
for (int i = 0; i < 10; i++) {
Thread t = new Thread(() -> {
for (int i1 = 0; i1 < 25; i1++) {
long nv = sg1.getNextValue();
assert (!data.contains(nv)) : "Value already stored? Value: " + nv;
data.add(nv);
try {
Thread.sleep(10);
} catch (InterruptedException e) {
}
}
});
t.start();
thr.add(t);
}
log.info("Waiting for threads to finish");
for (Thread t : thr) {
t.join();
}
long last = -1;
Collections.sort(data);
for (Long l : data) {
assert (last == l - 1);
last = l;
}
log.info("done");
Here is an example where the same sequence is being used by a lot of separate threads, each with its own connection to mongodb:
morphium.dropCollection(Sequence.class);
Thread.sleep(100); //wait for the drop to be persisted
//creating lots of sequences, with separate MongoDBConnections
//reading from the same sequence
//in different Threads
final Vector<Long> values=new Vector<>();
List<Thread> threads=new ArrayList<>();
final AtomicInteger errors=new AtomicInteger(0);
for (int i = 0; i < 10; i++) {
Morphium m=new Morphium(MorphiumConfig.fromProperties(morphium.getConfig().asProperties()));
Thread t=new Thread(()->{
SequenceGenerator sg1 = new SequenceGenerator(m, "testsequence", 1, 0);
for (int j=0;j<100;j++){
long l=sg1.getNextValue();
log.info("Got nextValue: "+l);
if(values.contains(l)){
log.error("Duplicate value "+l);
errors.incrementAndGet();
} else {
values.add(l);
}
try {
Thread.sleep((long) (100*Math.random()));
} catch (InterruptedException e) {
}
}
m.close();
});
threads.add(t);
t.start();
}
while (threads.size()>0){
//log.info("Threads active: "+threads.size());
threads.get(0).join();
threads.remove(0);
Thread.sleep(100);
}
assert(errors.get()==0);
Attention: after creating a new SequenceGenerator, the currentValue will be startValue-inc, so that getNextValue() returns startValue first.
When migrating to Morphium 4.2.x or higher from older versions, the sequences will not be compatible anymore due to a change in the ID.
To fix that, you need to run the following command in the MongoDB shell:
db.sequence.find({name:{$exists:true}}).forEach(function(x){db.sequence.deleteOne({_id:x._id}); x._id=x.name;delete x.name; db.sequence.save(x);});
Morphium implements a client side version of automatically encrypted fields. When defining a property, you can specify that its value should be encrypted. Morphium provides an AES implementation, but you can plug in any other encryption.
In order for encryption to work, you need to provide a ValueEncryptionProvider. This is a very simple interface:
package de.caluga.morphium.encryption;
public interface ValueEncryptionProvider {
void setEncryptionKey(byte[] key);
void setEncryptionKeyBase64(String key);
void setDecryptionKey(byte[] key);
void setDecryptionKeyBase64(String key);
byte[] encrypt(byte[] input);
byte[] decrypt(byte[] input);
}
There are two implementations available: AESEncryptionProvider and RSAEncryptionProvider.
Another interface being used is the EncryptionKeyProvider, a simple system for managing encryption keys:
package de.caluga.morphium.encryption;
public interface EncryptionKeyProvider {
void setEncryptionKey(String name, byte[] key);
void setDecryptionKey(String name, byte[] key);
byte[] getEncryptionKey(String name);
byte[] getDecryptionKey(String name);
}
The DefaultEncryptionKeyProvider is actually a very simple key-value store and needs to be filled manually. The implementation PropertyEncryptionKeyProvider reads those keys from encrypted property files.
Here is an example of how to use the transparent encryption:
@Entity
public static class EncryptedEntity {
@Id
public MorphiumId id;
@Encrypted(provider = AESEncryptionProvider.class, keyName = "key")
public String enc;
@Encrypted(provider = AESEncryptionProvider.class, keyName = "key")
public Integer intValue;
@Encrypted(provider = AESEncryptionProvider.class, keyName = "key")
public Float floatValue;
@Encrypted(provider = AESEncryptionProvider.class, keyName = "key")
public List<String> listOfStrings;
@Encrypted(provider = AESEncryptionProvider.class, keyName = "key")
public Subdoc sub;
public String text;
}
@Test
public void objectMapperTest() throws Exception {
morphium.getEncryptionKeyProvider().setEncryptionKey("key", "1234567890abcdef".getBytes());
morphium.getEncryptionKeyProvider().setDecryptionKey("key", "1234567890abcdef".getBytes());
MorphiumObjectMapper om = morphium.getMapper();
EncryptedEntity ent = new EncryptedEntity();
ent.enc = "Text to be encrypted";
ent.text = "plain text";
ent.intValue = 42;
ent.floatValue = 42.3f;
ent.listOfStrings = new ArrayList<>();
ent.listOfStrings.add("Test1");
ent.listOfStrings.add("Test2");
ent.listOfStrings.add("Test3");
ent.sub = new Subdoc();
ent.sub.intVal = 42;
ent.sub.strVal = "42";
ent.sub.name = "name of the document";
//serializing the document needs to encrypt the data
Map<String, Object> serialized = om.serialize(ent);
assert (!ent.enc.equals(serialized.get("enc")));
//checking deserialization used decryption
EncryptedEntity deserialized = om.deserialize(EncryptedEntity.class, serialized);
assert (deserialized.enc.equals(ent.enc));
assert (ent.intValue.equals(deserialized.intValue));
assert (ent.floatValue.equals(deserialized.floatValue));
assert (ent.listOfStrings.equals(deserialized.listOfStrings));
}
Please note that the key name used for encryption and decryption is defined per property in the @Encrypted annotation of the corresponding entity (keyName).
Morphium's config has a setting called objectSerializationEnabled. When set to true, this causes Morphium to use the standard binary serialization of the JDK to store any instance of any class that implements Serializable.
Another config setting, warnOnNoEntitySerialization, will create a warning message in the log whenever this serialization takes place.
objectSerializationEnabled is set to true by default to make development easier, but you probably do not want to use it on heavy load entities.
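A sketch of toggling those two settings; the setter names are assumed to match the setting names described above:
MorphiumConfig cfg = new MorphiumConfig();
cfg.setDatabase("test");
cfg.addHostToSeed("localhost:27017");
//assumed setters matching the settings described above
cfg.setObjectSerializationEnabled(true);
cfg.setWarnOnNoEntitySerialization(true);
Morphium morphium = new Morphium(cfg);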
To store the binary data, Morphium uses a helper class called BinarySerializedObject, which looks like this in MongoDB:
{
"_id" : ObjectId("5f5bc1d8f8fd8247688e41f5"),
"list" : [
{
"original_class_name" : "de.caluga.test.mongo.suite.NonEntitySerialization$NonEntity",
"_b64data" : "rO0ABXNyADtkZS5jYWx1Z2EudGVzdC5tb25nby5zdWl0ZS5Ob25FbnRpdHlTZXJpYWxpemF0aW9u\r\nJE5vbkVudGl0eV18gEK68jkAAgACTAAHaW50ZWdlcnQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAV2\r\nYWx1ZXQAEkxqYXZhL2xhbmcvU3RyaW5nO3hwc3IAEWphdmEubGFuZy5JbnRlZ2VyEuKgpPeBhzgC\r\nAAFJAAV2YWx1ZXhyABBqYXZhLmxhbmcuTnVtYmVyhqyVHQuU4IsCAAB4cAAAACp0ABZUaGFuayB5\r\nb3UgZm9yIHRoZSBmaXNo"
},
"Some string"
]
}
In this case, this "container" contains a list of non-entity objects:
@Entity
public class NonEntityContainer {
@Id
private MorphiumId id;
private List<Object> list;
private HashMap<String, Object> map;
public MorphiumId getId() {
return id;
}
public void setId(MorphiumId id) {
this.id = id;
}
public List<Object> getList() {
return list;
}
public void setList(List<Object> list) {
this.list = list;
}
public HashMap<String, Object> getMap() {
return map;
}
public void setMap(HashMap<String, Object> map) {
this.map = map;
}
}
public class NonEntity implements Serializable {
private String value;
private Integer integer;
public String getValue() {
return value;
}
public void setValue(String value) {
this.value = value;
}
public Integer getInteger() {
return integer;
}
public void setInteger(Integer integer) {
this.integer = integer;
}
@Override
public String toString() {
return "NonEntity{" +
"value='" + value + '\'' +
", integer=" + integer +
'}';
}
}
Attention: please keep in mind that you cannot store non-entities directly. Only a member variable of an entity (even if it sits inside a list or map) may be a non-entity.
In the jUnit tests, Morphium is tested to support complex data structures like lists of lists, lists of maps or maps of lists of entities. I think you'll get the picture:
public static class CMapListObject extends MapListObject {
private Map<String, List<EmbObj>> map1;
private Map<String, EmbObj> map2;
private Map<String, List<String>> map3;
private Map<String, List<EmbObj>> map4;
private Map<String, Map<String, String>> map5;
private Map<String, Map<String, EmbObj>> map5a;
private Map<String, List<Map<String, EmbObj>>> map6a;
private List<Map<String, String>> map7;
private List<List<Map<String, String>>> map7a;
....
Have a look at the tests in the code on github for more examples. The main challenge here is to determine the right type of the elements in the list, in order to be able to de-serialize them properly. In this case, de-serialization is done transparently in the background:
@Test
public void testListOfListOfMap() {
morphium.dropCollection(MapListObject.class);
CMapListObject o = new CMapListObject();
List<List<Map<String, String>>> lst = new ArrayList<>();
List<Map<String, String>> l2 = new ArrayList<>();
Map<String, String> map = new HashMap<>();
map.put("k1", "v1");
map.put("k2", "v2");
l2.add(map);
map = new HashMap<>();
map.put("k11", "v11");
map.put("k21", "v21");
map.put("k31", "v31");
l2.add(map);
lst.add(l2);
l2 = new ArrayList<>();
map = new HashMap<>();
map.put("k15", "v1");
map.put("k25", "v2");
l2.add(map);
map = new HashMap<>();
map.put("k51", "v11");
map.put("k533", "v21");
map.put("k513", "v31");
l2.add(map);
map = new HashMap<>();
map.put("k512", "v11");
map.put("k514", "v21");
map.put("k513", "v31");
l2.add(map);
lst.add(l2);
o.setMap7a(lst);
morphium.store(o);
CMapListObject ml = morphium.findById(CMapListObject.class, o.getId());
assert (ml.getMap7a().get(1).get(0).get("k15").equals("v1"));
}
As you see here, the deserialization is done transparently in the background; even several levels "down", the CMapListObject is initialized properly.
Caveat: this can only work if java knows the type of the elements in the list. As soon as there is a List<Object> in the type definition, morphium does not know what the type might be. It will try to deserialize it (which will work if it is a proper entity), but this might not work in all cases. If this detection fails, you'll likely end up with a ClassCastException. If so, try to define the data structure more strictly or simplify it.
For complex aggregations and analysis of your data in MongoDB, the first available choice was MapReduce. If necessary or convenient, you can use that with Morphium as well, although it is not as powerful as the aggregation framework (see below).
Here is a basic example on how to use MapReduce:
private void doSimpleMRTest(Morphium m) throws Exception {
List<UncachedObject> result = m.mapReduce(UncachedObject.class, "function(){emit(this.counter%2==0,this);}", "function (key,values){var ret={_id:ObjectId(), value:\"\", counter:0}; if (key==true) {ret.value=\"even\";} else { ret.value=\"odd\";} for (var i=0; i<values.length;i++){ret.counter=ret.counter+values[i].counter;}return ret;}");
assert (result.size() == 2);
boolean odd = false;
boolean even = false;
for (UncachedObject r : result) {
if (r.getValue().equals("odd")) {
odd = true;
}
if (r.getValue().equals("even")) {
even = true;
}
assert (r.getCounter() > 0);
}
assert (odd);
assert (even);
}
The problem here is that you need to write JavaScript code and hence have to switch between language contexts, whereas the aggregation support in Morphium lets you define the whole pipeline in Java.
The write concern aka the WriteSafety annotation in Morphium is not enough for being on the safe side. The WriteSafety only makes sure that, if all is ok, data is written to the number of nodes you want it to be written to. You define the safety level more or less from an application point of view. This does not cover network outages or other problems; in case of a failover during access, you end up with an exception in the application. To deal with this, the coding advice for MongoDB is to run all accesses in a loop, so that you can retry on failure and hope for fast recovery.
Morphium takes care of that: all access to mongo is done in a loop, and Morphium tries to detect whether an error is recoverable (like a failover) or not. There are several retry settings in the config.
Retry settings in writers
Morphium has 3 different types of writers (the default MorphiumWriter, the BufferedWriter and the AsyncWriter, see above). This has some implications: as the core of Morphium is asynchronous, we need to make sure there are not too many pending writes. (The size of the "pile" is determined by the maximum number of connections to mongo, so this is nothing you need to configure.)
This is where the retry settings for writers come in. When writing data, it is written either synchronously or asynchronously. In the latter case, the requests tend to pile up under heavy load, and we need to handle the case where this pile gets too high. This is the retry: when the pile of pending requests is too high, wait for a specified amount of time and try again to queue the operation. If that fails for all retries, throw an exception.
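A sketch of those writer retry settings; the setter names are assumed to mirror the property names that also show up in the example property file further below:
MorphiumConfig cfg = new MorphiumConfig();
//assumed setters mirroring the retry properties
cfg.setMaximumRetriesWriter(3);
cfg.setRetryWaitTimeWriter(1000); //ms to wait between retries
cfg.setMaximumRetriesBufferedWriter(3);
cfg.setRetryWaitTimeBufferedWriter(1000);
cfg.setMaximumRetriesAsyncWriter(3);
cfg.setRetryWaitTimeAsyncWriter(100);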
Retry settings for network errors
As we had a really sh... network which caused problems more than once a day, we needed to come up with a solution for this as well. As our network does not fail for more than a couple of requests, the idea is to detect network problems and retry the operation after a certain amount of time. This setting is specified globally in the Morphium config:
morphiumConfig.setRetriesOnNetworkError(10);
morphiumConfig.setSleepBetweenNetworkErrorRetries(500);
This causes Morphium to retry any operation on mongo 10 times (if a network related error occurs) and pause 500ms between each try. This includes, reads, writes, updates, index creation and aggregation. If the access failed after the (in this case) 10th try - rethrow the networking error to the caller.
MorphiumConfig
MorphiumConfig is the class that encapsulates all settings for Morphium. The most obvious settings are the host seed and port definitions, but there is a ton of additional settings available.
The standard toString() method of MorphiumConfig creates a JSON string representation of the configuration. To set all configuration options from a JSON string, just call createFromJson.
The configuration can also be stored in and read from a Properties object. Call MorphiumConfig.fromProperties(Properties p) to set all values according to the given properties; you can also pass the properties to the constructor. To get the properties for the current configuration, call asProperties() on a configured MorphiumConfig object.
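A small round-trip sketch using only the methods named above (the property file path is made up):
import java.io.FileInputStream;
import java.util.Properties;

Properties p = new Properties();
try (FileInputStream in = new FileInputStream("morphium.properties")) {
    p.load(in); //read settings from disk
}
MorphiumConfig cfg = MorphiumConfig.fromProperties(p);
//...use cfg, then write the effective configuration back out
Properties effective = cfg.asProperties();
effective.store(System.out, "current morphium settings");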
Here is an example property-file:
maxWaitTime=1000
maximumRetriesBufferedWriter=1
maxConnections=100
retryWaitTimeAsyncWriter=100
maxAutoReconnectTime=5000
blockingThreadsMultiplier=100
housekeepingTimeout=5000
hosts=localhost\:27017, localhost\:27018, localhost\:27019
retryWaitTimeWriter=1000
globalCacheValidTime=50000
loggingConfigFile=file\:/Users/stephan/_Morphium_/target/classes/_Morphium_-log4j-test.xml
writeCacheTimeout=100
connectionTimeout=1000
database=_Morphium__test
maximumRetriesAsyncWriter=1
maximumRetriesWriter=1
retryWaitTimeBufferedWriter=1000
The minimal property file would define only hosts and database. All other values are defaulted.
If you want to specify classes in the config (like the query implementation), you need to specify the fully qualified class name, e.g. de.caluga.morphium.customquery.QueryImpl.
The most straightforward way of configuring Morphium is using the object directly. This means you call the getters and setters according to the given variable names above (like setMaxAutoReconnectTime()).
The minimum configuration is explained above: you only need to specify the database name and the host(s) to connect to. All other settings have sensible defaults, which should work for most cases.
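For illustration, the minimal programmatic configuration mirrors the minimal property file:
MorphiumConfig cfg = new MorphiumConfig();
cfg.setDatabase("test");
cfg.addHostToSeed("localhost:27017");
Morphium morphium = new Morphium(cfg);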
There are a lot of settings and customizations you can do within Morphium. Here we discuss all of them:
- camel case conversion: default is true (field names are converted to camel case and back)
- check for new objects on write: by default, IDs are of type ObjectId. Anytime you write an object with an _id of that type, the document is either updated or inserted, depending on whether or not the ID is set. If it is inserted, the newly created ObjectId is returned and added to the corresponding object. But if the id is not of type ObjectId, this mechanism fails; no ObjectId is created. This is no problem when creating new objects, but with updates you might not be sure whether the object actually is new or not. If this option is set to true, Morphium checks upon storing whether the object to be stored already exists in the database, and updates it if so.
- connection timeout: 0 = no timeout
- auto-reconnect: if true, connections are re-established when lost. Default is true.
- maximum auto-reconnect time (maxAutoReconnectTime, see the property file above): 0 = try as long as it takes
- auto values (@LastChange, @CreationTime, @LastAccess, ...): if you want to switch these off globally, you can do so in the config. Very useful for test environments, which should not tamper with production data. By default the auto values are enabled.
- read cache: globally enables or disables the caches defined with the @Cache annotation. By default it's enabled.
- asynchronous writes: globally enables or disables the @AsyncWrites annotation
- buffered writes: globally enables or disables the @WriteBuffer annotation
- defaultReadPreference: whether to read from primary, secondary or nearest by default. Can be overridden with the @ReadPreference annotation per entity.
- replicaSetMonitoringTimeout: time interval at which the replicaset status is updated.
In addition to those settings describing the behaviour of Morphium, you can also define custom classes to be used internally:
- object mapper: translates entities to MongoDB Documents and back. By default it uses the ObjectMapperImpl; your custom implementation must implement the interface ObjectMapper.
- iterator: by default the MorphiumIteratorImpl is being used; your custom implementation must implement the interface MorphiumIterator
- aggregator: custom implementations need to implement the Aggregator interface
- aggregator factory: custom implementations need to implement the AggregatorFactory interface
- query factory: custom implementations need to implement the QueryFactory interface
- cache: custom implementations need to implement the MorphiumCache interface. Default is MorphiumCacheImpl. You need to specify a fully configured cache object here, not only a class object.
- driver: set via MorphiumConfig.setDriverClass(MetaDriver.class.getName()). Custom implementations need to implement the MorphiumDriver interface. By default the MongodbDriver is used, which connects to mongo using the official Java driver. But there are some other implementations that have their advantages, like the InMemoryDriver.
In MongoDB until V2.4, authentication and user privileges were not really existent. With 2.4, roles were introduced, which might make it a bit more complicated to get things working.
Morphium supports authentication, of course, but on startup only. So usually you have an application user which connects to the database. The login to mongo is configured as follows:
MorphiumConfig cfg=new MorphiumConfig(...);
...
cfg.setMongoLogin("tst");
cfg.setMongoPassword("tst");
This user usually needs to have read/write access to the database. If you want your indices to be created automatically, this user also needs to have the role dbAdmin for the corresponding database. If you use Morphium with a replicaset of mongo nodes, Morphium needs to be able to access the local database and read the replicaset status. In order to do so, either the mongo user needs additional roles (clusterAdmin and read on the local db), or you specify a special user for that task which has exactly those roles. Morphium authenticates with that different user for accessing the replicaset status (and only for that), configured very similarly to the normal login:
cfg.setMongoAdminUser("adm");
cfg.setMongoAdminPwd("adm");
You need to run your mongo nodes with --auth (or authenticate = true set in the config), and if you run a replicaset, those nodes need to share a key file or use kerberos authentication (see http://docs.mongodb.org/manual/reference/user-privileges/). Let's assume that all works for now. Now you need to specify the users. One way of doing that is the following:
add the user for mongo to your main database (in our case tst)
add an admin user for your own usage from shell to admin db (with all privileges)
add the clusterAdmin user to admin db as well, grant read access to local
use admin
db.addUser({user:"adm",pwd:"adm",
roles:["read","clusterAdmin"],
otherDBRoles:{local:["read"]}
})
db.addUser({user:"admin",pwd:"admin",
roles:["dbAdminAnyDatabase",
"readWriteAnyDatabase",
"clusterAdmin",
"userAdminAnyDatabase"]
})
use morphium_test
db.addUser({user:"tst",pwd:"tst",roles:["readWrite","dbAdmin"]})
Here morphium_test is your application database Morphium is connected to primarily. The admin db is a system database.
This is far from being a complete guide; I hope it just gets you started with authentication...
Entities in Morphium are just "plain old Java objects" (POJOs). So you just create your data objects as usual. You only need to add the annotation @Entity to the class to tell Morphium "yes, this can be stored". The only additional thing you need to take care of is the definition of an ID field. This can be any field in the POJO identifying the instance. It's best to use ObjectId as the type of this field, as these can be created automatically and you don't need to care about them.
If you specify your ID to be of a different kind (like String), you need to make sure that the String is set when the object is written; otherwise you might not find the object again. The shortest entity would look like this:
@Entity
public class MyEntity {
@Id private ObjectId id;
//.. add getter and setter here
}
Indexes are critical in mongo, so you should definitely define your indexes as soon as possible during development. Indexes can be defined on the entity itself; there are several ways to do so:
- @Id always creates an index
- you can add @Index to any field to have it indexed:
@Index
private String name;
You can define combined indexes using the @Index annotation on the class itself:
@Index({"counter, name","value,thing,-counter"})
public class MyEntity {
This would create two combined indexes: one with counter and name (both ascending) and one with value, thing and descending counter. You could also define single-field indexes using this annotation, but it's easier to read when the annotation is added directly to the field.
Indexes will be created automatically if you create the collection. If you want the indexes to be created even if there is already data stored, you need to call morphium.ensureIndicesFor(MyEntity.class). You may also create your own indexes, which are not defined in annotations, by calling morphium.ensureIndex(). As parameter you pass in a Map containing field names and order (-1 or 1), or just a prefixed list of strings (like "-counter", "name"); see the sketch below.
Every index might have a set of options which define the kind of index, like buildInBackground or unique. You need to add those as second parameter to the @Index annotation:
@Entity
@Index(value = {"-name, timer", "-name, -timer", "lst:2d", "name:text"},
options = {"unique:1", "", "", ""})
public static class IndexedObject {
Here 4 indexes are created. The first two are more or less standard, whereas the lst index is a geospatial one and the index on name is a text index (only since mongo 2.6). If you need to define options for one of your indexes, you need to define them for all of them (here, only the first index is unique).
MongoDB has a built-in text search functionality since V3.x. It can be used from the command line or via Morphium. In order for it to work, a text index needs to be defined for the entity/collection. Here is an example for an entity called Person:
@Entity
@Index(value = {"vorname:text,nachname:text,anrede:text,description:text", "age:1"}, options = {"name:myIdx"})
public static class Person {
//properties and getters/setters left out for readability
}
In this case, a text index was built on the fields vorname, nachname, anrede and description.
To use the index, we need to create a text query:
@Test
public void textIndexTest() throws Exception {
morphium.dropCollection(Person.class);
try {
morphium.ensureIndicesFor(Person.class);
} catch (Exception e) {
log.info("Text search not enabled - test skipped");
return;
}
createData();
waitForWrites();
Query<Person> p = morphium.createQueryFor(Person.class);
List<Person> lst = p.text(Query.TextSearchLanguages.english, "hugo", "bruce").asList();
assert (lst.size() == 2) : "size is " + lst.size();
p = morphium.createQueryFor(Person.class);
lst = p.text(Query.TextSearchLanguages.english, false, false, "Hugo", "Bruce").asList();
assert (lst.size() == 2) : "size is " + lst.size();
}
In this case, some data is created, which puts the names of some superheroes into mongo. Searching for text is something different than searching via regular expressions, as text indexes are way more efficient in that case.
If you need more information on text indexes, have a look at MongoDB's documentation and take a look at the tests for text indexes within the source code of Morphium.
Similar to indexes, you can define your collection to be capped using the @Capped annotation. This annotation takes two arguments: the maximum number of entries and the maximum size. If the collection does not exist, it will be created as a capped collection using those two values. You can always ensureCapped your collection; unfortunately, only the size parameter will be honoured then.
Querying is done via the Query-Object, which is created by Morphium itself (using the Query Factory). The definition of the query is done using the fluent interface:
Query<MyEntity> query=morphium.createQueryFor(MyEntity.class);
query=query.f("id").eq(new ObjectId());
query=query.f("valueField").eq("the value");
query=query.f("counter").lt(22);
query=query.f("personName").matches("[a-zA-Z]+");
query=query.limit(100).sort("counter");
In this example, I refer to several fields of different types. The query itself always has the same basic syntax:
queryObject=queryObject.f(FIELDNAME).OPERATION(Value);
queryObject=queryObject.skip(NUMBER); //skip a number of entries
queryObject=queryObject.limit(NUMBER); //limit the result
queryObject.sort(FIELD_TO_SORTBY);
As field name you may either use the name of the field as it is in mongo, or the name of the field in java. If you specify an unknown field, Morphium raises a RuntimeException.
When defining queries, it is also good practice to define enums for all of your fields. This makes it hard to mistype a field name in a query:
public class MyEntity {
//.... field definitions
public enum Fields { id, value, personName,counter, }
}
There is an IntelliJ plugin ("GeneratePropertyEnums") that creates those enums automatically. When defining the query, you then don't have to type in the name of the field, just use the field enum:
query=query.f(MyEntity.Fields.counter).eq(123);
This avoids typos and shows compile time errors, when a field was renamed for whatever reason.
After you have defined your query, you probably want to access the data in mongo. Via Morphium, there are several possibilities to do that:
- queryObject.get(): returns the first object matching the query, only one. Or null if nothing matched
- queryObject.asList(): returns a list of all matching objects. Reads all data into RAM. Useful for small amounts of data
- Iterator<MyEntity> it=queryObject.asIterator(): creates a MorphiumIterator to iterate through the data, which does not read all data at once, but only a couple of elements in a row (default 10)
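A short sketch of those three access methods in action, reusing the MyEntity example from above:
Query<MyEntity> q = morphium.createQueryFor(MyEntity.class).f("counter").lt(100);
MyEntity first = q.get();             // first match, or null if nothing matched
List<MyEntity> all = q.asList();      // reads the complete result into RAM
for (MyEntity e : q.asIterable(10)) { // iterates the result in chunks of 10
    // process e
}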
Most of your queries will probably be simple ones, like searching for a certain id or value. This is done quite simply with the query object: morphium.createQueryFor(MyEntity.class).f("field").eq(value). If you add more f(field) calls to the query, they will be concatenated by a logical AND, so you can do something like:
Query<UncachedObject> q=morphium.createQueryFor(UncachedObject.class);
q.f("counter").gt(10).f("counter").lt(20);
This would result in a query like: "All UncachedObjects where counter is greater than 10 and counter is less than 20".
In addition to those AND-queries, you can add an unlimited list of queries which will be concatenated by a logical OR:
q.f("counter").lt(100).or(q.q().f("value").eq("Value 12"), q.q().f("value").eq("other"));
This would create a query like: "all UncachedObjects where counter is less than 100 and (value is 'Value 12' or value is 'other')".
The method q() creates a new, empty query for the same object; it's a convenience method. Attention: never use the query object itself in the parameter list of or() - this would cause an endless loop!
This gives you the possibility to create rather complex queries, which should cover about 75% of all cases. You can also add NOR-queries; these work like "not or"-queries:
q.f("counter").lt(100).nor(q.q().f("counter").eq(90), q.q().f("counter").eq(55));
This would result in a query like: "All objects where counter is less than 100 and not (counter=90 or counter=55)".
This adds another complexity level to the queries ;-)
If that's not enough, you can specify your own query in mongo syntax.
You can also specify your own query object (Map<String,Object>) in case of a very complex query. This is part of the Query-Object and can be used rather easily:
Map<String,Object> query=new HashMap<>();
query.put("counter",Utils.getMap("$lt",10));
Query<UncachedObject> q=MorphiumSingleton.get().createQueryFor(UncachedObject.class);
List<UncachedObject> lst=q.complexQuery(query);
Admittedly, in this case the query is a very simple one (counter < 10), but I think you get the idea...
Still, the fluent query interface does have its limitations. For example, it's not possible to nest several or-concatenated queries, like (counter > 14 or counter < 10) and (counter > 50 or counter < 30). Then again, I'm not sure such a query would be very legible anyway...
Morphium has support for a special iterator which steps through the data a couple of elements at a time. This is the default behaviour, and the MorphiumIterator is quite capable:
- queryObject.asIterable() will step through the result list, 10 at a time
- queryObject.asIterable(100) will step through the result list, 100 at a time
- queryObject.asIterable(100,5) will step through the result list, 100 at a time, and keep 4 chunks of 100 elements each as prefetch buffers. Those will be filled in the background
- MorphiumIterator it=queryObject.asIterable(100,5); it.setMultithreaddedAccess(true); uses the same iterator as before, but makes it thread safe
The problem: when dealing with huge collections or lots of data, you'd probably add paging to your queries, reading data in chunks of, for example, 100 objects to avoid memory overflows. This is what the MorphiumIterator provides. It works as Iterable or Iterator - whatever you like. It's included in the Query interface and can be used very easily:
Query<Type> q=morphium.createQueryFor(Type.class);
q=q.f("field").eq..... //whatever
for (Type t:q.asIterable()) {
//do something with t
}
This creates an iterator, reading all objects from the query in chunks of 10... If you want to read them one by one, you only need to pass the chunk size to the call:
for (Type t:q.asIterable(1)) {
//now reads every single Object from db
}
You can also use the iterator as in the "good ol' days".
Iterator<Type> it=q.asIterable(100); //reads objects in chunks of 100
while (it.hasNext()) {
... //do something
}
If you use the MorphiumIterator as the type it actually is, you'd get even more information:
MorphiumIterator<Type> it=q.asIterable(100);
it.next();
....
long count=it.getCount(); //returns the number of objects to be read
int cursorPos=it.getCursor(); //where are we right now, how many did we read
it.ahead(5); //jump ahead 5 objects
it.back(4); //jump back
int bufferSize=it.getCurrentBufferSize(); //how many objects are currently stored in RAM
List<Type> lst=it.getCurrentBuffer(); //get the objects in RAM
Attention: the count is the number of objects matching the query at instantiation of the iterator. This ensures that the iterator terminates. The query will be executed every time the buffer boundaries are reached. This might cause unexpected results if the sort order of the query is wrong.
For example:
//created Uncached Objects with counter 1-100; value is always "v"
Query<UncachedObject> qu=morphium.createQueryFor(UncachedObject.class).sort("-counter");
for (UncachedObject u:qu.asIterable()) {
UncachedObject uc=new UncachedObject();
uc.setCounter(u.getCounter()+1);
uc.setValue("WRONG!");
MorphiumSingleton.get().store(uc);
log.info("Current Counter: "+u.getCounter()+" and Value: "+u.getValue());
}
The output is as follows:
14:21:10,494 INFO [main] IteratorTest: Current Counter: 100 and Value: v
14:21:10,529 INFO [main] IteratorTest: Current Counter: 99 and Value: v
14:21:10,565 INFO [main] IteratorTest: Current Counter: 98 and Value: v
14:21:10,610 INFO [main] IteratorTest: Current Counter: 97 and Value: v
14:21:10,645 INFO [main] IteratorTest: Current Counter: 96 and Value: v
14:21:10,680 INFO [main] IteratorTest: Current Counter: 95 and Value: v
14:21:10,715 INFO [main] IteratorTest: Current Counter: 94 and Value: v
14:21:10,751 INFO [main] IteratorTest: Current Counter: 93 and Value: v
14:21:10,786 INFO [main] IteratorTest: Current Counter: 92 and Value: v
14:21:10,822 INFO [main] IteratorTest: Current Counter: 91 and Value: v
14:21:10,857 INFO [main] IteratorTest: Current Counter: 96 and Value: WRONG!
14:21:10,892 INFO [main] IteratorTest: Current Counter: 95 and Value: v
14:21:10,927 INFO [main] IteratorTest: Current Counter: 95 and Value: WRONG!
14:21:10,963 INFO [main] IteratorTest: Current Counter: 94 and Value: v
14:21:10,999 INFO [main] IteratorTest: Current Counter: 94 and Value: WRONG!
14:21:11,035 INFO [main] IteratorTest: Current Counter: 93 and Value: v
14:21:11,070 INFO [main] IteratorTest: Current Counter: 93 and Value: WRONG!
14:21:11,105 INFO [main] IteratorTest: Current Counter: 92 and Value: v
14:21:11,140 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:11,175 INFO [main] IteratorTest: Current Counter: 91 and Value: v
14:21:11,210 INFO [main] IteratorTest: Current Counter: 94 and Value: WRONG!
14:21:11,245 INFO [main] IteratorTest: Current Counter: 94 and Value: WRONG!
14:21:11,284 INFO [main] IteratorTest: Current Counter: 93 and Value: v
14:21:11,328 INFO [main] IteratorTest: Current Counter: 93 and Value: WRONG!
14:21:11,361 INFO [main] IteratorTest: Current Counter: 93 and Value: WRONG!
14:21:11,397 INFO [main] IteratorTest: Current Counter: 93 and Value: WRONG!
14:21:11,432 INFO [main] IteratorTest: Current Counter: 92 and Value: v
14:21:11,467 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:11,502 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:11,538 INFO [main] IteratorTest: Current Counter: 91 and Value: v
14:21:11,572 INFO [main] IteratorTest: Current Counter: 93 and Value: WRONG!
14:21:11,607 INFO [main] IteratorTest: Current Counter: 93 and Value: WRONG!
14:21:11,642 INFO [main] IteratorTest: Current Counter: 93 and Value: WRONG!
14:21:11,677 INFO [main] IteratorTest: Current Counter: 93 and Value: WRONG!
14:21:11,713 INFO [main] IteratorTest: Current Counter: 93 and Value: WRONG!
14:21:11,748 INFO [main] IteratorTest: Current Counter: 92 and Value: v
14:21:11,783 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:11,819 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:11,853 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:11,889 INFO [main] IteratorTest: Current Counter: 91 and Value: v
14:21:11,923 INFO [main] IteratorTest: Current Counter: 93 and Value: WRONG!
14:21:11,958 INFO [main] IteratorTest: Current Counter: 93 and Value: WRONG!
14:21:11,993 INFO [main] IteratorTest: Current Counter: 93 and Value: WRONG!
14:21:12,028 INFO [main] IteratorTest: Current Counter: 93 and Value: WRONG!
14:21:12,063 INFO [main] IteratorTest: Current Counter: 92 and Value: v
14:21:12,098 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:12,133 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:12,168 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:12,203 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:12,239 INFO [main] IteratorTest: Current Counter: 91 and Value: v
14:21:12,273 INFO [main] IteratorTest: Current Counter: 93 and Value: WRONG!
14:21:12,308 INFO [main] IteratorTest: Current Counter: 93 and Value: WRONG!
14:21:12,344 INFO [main] IteratorTest: Current Counter: 93 and Value: WRONG!
14:21:12,379 INFO [main] IteratorTest: Current Counter: 92 and Value: v
14:21:12,413 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:12,448 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:12,487 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:12,521 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:12,557 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:12,592 INFO [main] IteratorTest: Current Counter: 91 and Value: v
14:21:12,626 INFO [main] IteratorTest: Current Counter: 93 and Value: WRONG!
14:21:12,662 INFO [main] IteratorTest: Current Counter: 93 and Value: WRONG!
14:21:12,697 INFO [main] IteratorTest: Current Counter: 92 and Value: v
14:21:12,733 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:12,769 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:12,804 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:12,839 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:12,874 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:12,910 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:12,945 INFO [main] IteratorTest: Current Counter: 91 and Value: v
14:21:12,980 INFO [main] IteratorTest: Current Counter: 93 and Value: WRONG!
14:21:13,015 INFO [main] IteratorTest: Current Counter: 92 and Value: v
14:21:13,051 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,085 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,121 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,156 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,192 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,226 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,262 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,297 INFO [main] IteratorTest: Current Counter: 91 and Value: v
14:21:13,331 INFO [main] IteratorTest: Current Counter: 92 and Value: v
14:21:13,367 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,403 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,446 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,485 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,520 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,556 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,592 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,627 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,662 INFO [main] IteratorTest: Current Counter: 91 and Value: v
14:21:13,697 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,733 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,768 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,805 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,841 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,875 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,911 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,946 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:13,982 INFO [main] IteratorTest: Current Counter: 92 and Value: WRONG!
14:21:14,017 INFO [main] IteratorTest: Current Counter: 91 and Value: v
14:21:14,017 INFO [main] IteratorTest: Cleaning up...
14:21:14,088 INFO [main] IteratorTest: done...
The first chunk is OK, but all that follow are not. Fortunately, count did not change, or the iterator would never stop. Hence, if your collection changes while you're iterating over it, you might get unexpected results. Writing to the same collection within the loop of the iterator is generally a bad idea...
Advanced Features
Since V2.2.5 the Morphium iterator supports lookahead (prefetching). This means it's not only possible to define a window size to step through your data, but also how many of those windows should be prefetched while you step through the first one.
This works totally transparently for the user; it's just a simple call to activate this feature:
theQuery.asIterable(1000,5); //window size 1000, 5 windows prefetch
Since 2.2.5 the Morphium iterator can also be used by multiple threads simultaneously - several threads accessing the same iterator. This might be useful for processing query results in parallel and the like. To use that, you only need to set setMultithreaddedAccess to true on the iterator itself:
MorphiumIterator<MyEntity> it=theQuery.asIterable(1000,15)
it.setMultithreaddedAccess(true);
Attention: setting multithreadded access to true will make the iterator a bit slower, as it has to do some things in a synchronized fashion.
Storing is more or less a very simple thing: just call morphium.store(pojo) and you're done. Although there is a bit more to it:
- if the object does not have an id (the id field is null), a new entry is inserted into the corresponding collection
- if the object does have an id set (!= null), an update is issued to the db
- you can call morphium.storeList(lst), where lst is a list of entities. These will be stored in bulk if possible, or result in a bulk update. Even mixed lists (updates and inserts) are possible; Morphium will take care of sorting it out
- there are additional methods for writing to mongo, like the update operations set, unset, push, pull and so on (update a value on one entity or for all elements matching a query), deleting objects or objects matching a query, and the like
- the writer that actually writes the data is chosen depending on the configuration of the entity (see Annotations below)
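A hedged sketch of those operations in action (exact signatures may differ slightly between Morphium versions):
MyEntity e = new MyEntity();
morphium.store(e);                  // id is null -> insert
e.setCounter(42);
morphium.store(e);                  // id is set now -> update
morphium.storeList(Arrays.asList(new MyEntity(), e)); // bulk write, mixed inserts/updates
morphium.set(e, "counter", 43);     // single-field update operation
morphium.delete(e);                 // remove the document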
Morphium by default converts all Java CamelCase identifiers into underscore-separated strings. So, MyEntity will be stored in a collection called my_entity, and the field aStringValue would be stored as a_string_value.
When specifying a field, you can always use either the transformed name or the name of the corresponding java field. Collection names are always determined by the classname itself.
But in Morphium you can of course change that behaviour. The easiest way is to switch off the transformation of CamelCase globally by setting camelCaseConversionEnabled to false (see above: Configuration). If you switch it off, it's off completely - there is no way to switch it on for just one collection. If you need only some types converted, but not all, you have to keep the conversion globally enabled and switch it off for certain types. This is done in either the @Entity or @Embedded annotation.
@Entity(convertCamelCase=false)
public class MyEntity {
private String myField;
This example will create a collection called MyEntity (no conversion) and the field will be called myField in mongo as well (no conversion).
Attention: Please keep in mind that, if you switch off camelCase conversion globally, nothing will be converted!
You can tell Morphium to use the fully qualified class name as basis for the collection name instead of the simple class name. This would result in creating a collection de_caluga_morphium_my_entity for a class called de.caluga.morphium.MyEntity. Just set the flag useFQN in the entity annotation to true.
@Entity(useFQN=true)
public class MyEntity {
The recommendation is not to use the fully qualified class name unless it's really needed.
In addition to that, you can define custom names for fields and collections using the corresponding annotations (@Entity, @Property).
For entities you may set a custom name by using the collectionName value of the annotation:
@Entity(collectionName="totallyDifferent")
public class MyEntity {
private String myValue;
}
The collection name will be totallyDifferent in mongo. Keep in mind that camel case conversion for fields will still take place (if it is enabled in config), so in this case the field name would be my_value.
You can also specify the name of a field using the property annotation:
@Property(fieldName="my_wonderful_field")
private String something;
Again, this only affects this field (in this case, it will be called my_wonderful_field in mongo), and this field won't be converted to camel case. This might cause a mix-up of cases in your MongoDB, so please use this with care.
When accessing fields in Morphium (especially for the query) you may use either the name of the Field in Java (like myEntity) or the converted name depending on the config (camelCased or not, or custom).
In some cases it might be necessary to have the collection name calculated dynamically. This can be achieved using the NameProvider interface. You can define a NameProvider for your entity in the @Entity annotation; you need to specify the type there. By default, the NameProvider for all entities is DefaultNameProvider, which actually looks like this:
public final class DefaultNameProvider implements NameProvider {
@Override
public String getCollectionName(Class<?> type, ObjectMapper om, boolean translateCamelCase, boolean useFQN, String specifiedName, Morphium morphium) {
String name = type.getSimpleName();
if (useFQN) {
name = type.getName().replaceAll("\\.", "_");
}
if (specifiedName != null) {
name = specifiedName;
} else {
if (translateCamelCase) {
name = morphium.getARHelper().convertCamelCase(name);
}
}
return name;
}
}
You can use your own provider to calculate collection names depending on time and date or for example depending on the querying host name (like: create a log collection for each server separately or create a collection storing logs for only one month each).
Attention: Name Provider instances will be cached, so please implement them thread safe.
mongo is really fast and stores a lot of data in no time. Sometimes it's hard to get this data out of mongo again; especially for logs this might be an issue (in our case, we had more than 100 million entries in one collection). It might be a good idea to change the collection name based on some rule (by date, timestamp, whatever you like). Morphium supports this using a strategy pattern.
public class DatedCollectionNameProvider implements NameProvider{
@Override
public String getCollectionName(Class<?> type, ObjectMapper om, boolean translateCamelCase, boolean useFQN, String specifiedName, Morphium morphium) {
SimpleDateFormat df=new SimpleDateFormat("yyyyMM");
String date=df.format(new Date());
String ret=null;
if (specifiedName!=null) {
ret=specifiedName+"_"+date;
} else {
String name = type.getSimpleName();
if (useFQN) {
name=type.getName();
}
if (translateCamelCase) {
name=om.convertCamelCase(name);
}
ret=name+"_"+date;
}
return ret;
}
}
This would create a monthly named collection like "my_entity_201206". In order to use that name provider,
just add it to your @Entity
-Annotation:
@Entity(nameProvider = DatedCollectionNameProvider.class)
public class MyEntity {
....
}
Performance note: the name provider instances themselves are cached for each type upon first use, so you might do as much work as possible in the constructor. BUT: on every read or store of an object the corresponding name provider method getCollectionName is called; this might cause performance drawbacks if your logic in there is quite heavy and/or time consuming.
This is something quite common: you want to know when your data was last changed and maybe who did it. Usually you keep a timestamp with your object and need to make sure these timestamps are updated accordingly. Morphium does this automatically - just declare the annotations:
@Entity
@NoCache
@LastAccess
@LastChange
@CreationTime
public static class TstObjLA {
@Id
private ObjectId id;
@LastAccess
private long lastAccess;
@LastChange
private long lastChange;
@CreationTime
private long creationTime;
private String value;
public long getLastAccess() {
return lastAccess;
}
public void setLastAccess(long lastAccess) {
this.lastAccess = lastAccess;
}
public long getLastChange() {
return lastChange;
}
public void setLastChange(long lastChange) {
this.lastChange = lastChange;
}
public long getCreationTime() {
return creationTime;
}
public void setCreationTime(long creationTime) {
this.creationTime = creationTime;
}
public String getValue() {
return value;
}
public void setValue(String value) {
this.value = value;
}
}
You might ask why we need to specify at both the class and the field that access time is to be stored. The reason is performance! In order to search for a certain annotation, we would need to read all fields of the whole hierarchy of the corresponding object, which is rather expensive. This way, we only search for those access fields if necessary. All of them are stored as long - System.currentTimeMillis().
Explanation:
- @LastAccess: stores the last time this object was read from db. Careful with that one: it will create a write access for every read!
- @CreationTime: stores the creation timestamp
- @LastChange: timestamp of the last moment this object was stored
All writer implementations support asynchronous calls like
public <T> void store(List<T> lst, AsyncOperationCallback<T> callback);
If callback==null, the method call is synchronous. If callback!=null, the call to mongo runs asynchronously in the background. Usually, you specify the default behaviour in your class definition:
@Entity
@AsyncWrites
public class EntityType {
...
}
All write operations to this type will be asynchronous! (synchronous call is not possible in this case!).
Asynchronous calls are also possible for Queries, you can call q.asList(callback) if you want to have this query be executed in background.
Asynchronous calls are issued to MongoDB at once, but the calling thread does not have to wait - the operation is executed in the background. The @WriteBuffer annotation specifies a write buffer for this type (you can specify the size etc. if you like). All writes will be held temporarily in RAM until the time frame is reached or the number of objects in the write buffer exceeds the maximum you specified (0 means no maximum). Attention: if you shut down the Java VM during that time, those entries will be lost; please only use this for logging or "not so important" data. Specifying a write buffer for your entity is quite easy:
@Entity
@WriteBuffer(size=1000, timeout=5000)
public class MyBufferedLog {
....
}
This means all write access to this type will be buffered for 5 seconds or 1000 entries, whichever occurs first. If you want a different behaviour when the maximum number of entries is reached, you can specify a strategy:
- WRITE_NEW: write the newest entry (synchronously, without adding it to the buffer)
- WRITE_OLD: write some old entries (and remove them from the buffer)
- DEL_OLD: delete old entries from the buffer - the oldest elements won't be written to mongo!
- IGNORE_NEW: just ignore incoming entries - the newest elements WILL NOT BE WRITTEN!
- JUST_WARN: increase the buffer and warn about it
Morphium supports javax.validation annotations, which may be used to ensure data quality:
@Id
private MorphiumId id;
@Min(3)
@Max(7)
private int theInt;
@NotNull
private Integer anotherInt;
@Future
private Date whenever;
@Pattern(regexp = "m[ueĂŒ]nchen")
private String whereever;
@Size(min = 2, max = 5)
private List friends;
@Email
private String email;
You do not need to have any validator implementation in the classpath; Morphium detects whether validation is available and only enables it then.
A lot of things can be configured in Morphium using annotations. Those annotations might be added to classes, fields, or both.
This is perhaps the most important annotation, as it has to be put on every class whose instances you want to have stored to the database (your data objects).
By default, the name of the collection for this entity's data is derived from the name of the class itself, with camel case converted to underscore strings (unless configured otherwise).
These are the settings available for entities:
- translateCamelCase: default true. If set, translate the name of the collection and all fields (only those which do not have a custom name set)
- collectionName: set the collection name. May be any value; camel case won't be converted
- useFQN: if set to true, the collection name will be built based on the fully qualified class name, otherwise on the class name itself. Default is false
- polymorph: if set to true, all entities of this type stored to mongo will contain the fully qualified name of the class. This is necessary if you have several different entities stored in the same collection. Usually only used for polymorph lists, but you could store any polymorph-marked object into that collection. Default is false
- nameProvider: specify the class of the name provider you want to use for this entity. The name provider is used to determine the name of the collection for this type. By default it uses the DefaultNameProvider (which just uses the class name to build the collection name). See above
Marks POJOs for object mapping which do not need an id. These objects will be marshalled and unmarshalled, but only as part of another object (a subdocument). This has to be set at class level.
You can switch off camel case conversion for this type and determine whether the data might be used polymorphically.
Ensures that all write accesses to this entity are asynchronous.
Switches OFF caching for this entity. This is useful if a superclass has caching enabled and you need to disable it here.
Valid at: Class level
Tells Morphium to create a capped collection for this object (see capped collections above). The parameters are the maximum number of entries and the maximum size, as described there.
These are the collation settings for the given entity. They will be used when creating new collections and indexes.
A special Morphium feature: this annotation has to be added to at least one field of type Map<String,Object>. It makes sure that all data in mongo that cannot be mapped to a field of this entity is added to the annotated map properties.
By default this map is read only, but if you want to change those values or add new ones, you can set readOnly=false.
It's possible to define aliases for field names with this annotation (hence it has to be added to a field).
@Alias({"stringList","string_list"})
List<String> strLst;
In this case, when reading an object from MongoDB, the field strLst might also be named stringList or string_list in mongo. When storing it, it will always be stored as strLst or str_lst, according to the config's camel case settings.
This feature comes in handy when migrating data.
Has to be added to both the class and the field(s) to store the creation time in. This value is set the moment the object is stored to mongo. The data type for the creation time might be:
- long / Long: store as timestamp
- Date: store as date object
- String: store as a string; you may need to specify the format for that
Same as creation time, but storing the last access to this type. Attention: this will cause all objects read to be updated and written again with a changed timestamp. Usage: find out which entries in a translation table have not been used for quite some time - either the translation is not necessary anymore, or the corresponding page is not being used.
Same as the two above, except the timestamp of the last change (to mongo) is stored. The value is set just before the object is written to mongo.
Define the read preference level for an entity. This annotation has to be used at class level. Valid types are:
- PRIMARY: only read from the primary node
- PRIMARY_PREFERED: if possible, use the primary
- SECONDARY: only read from a secondary node
- SECONDARY_PREFERED: if possible, use a secondary
- NEAREST: I don't care, take the fastest
A very important annotation for a field of every entity: it marks the field that is the id and identifies any object. It will be stored as _id in mongo (and will get an index). The id may be of any type, though usage of ObjectId is strongly recommended.
Define indexes. Indexes can be defined for a single field. Combined indexes need to be defined on class level. See above.
List of fields in the class that can be ignored. Defaults to none.
- usually an exact match, but you can use ~ as substring marker and / as regex marker
- field names are Java fields, not the translated names for mongo
- IgnoreFields will not be honoured for fields marked with @Property and a custom fieldName
- this will be inherited by subclasses!
@Entity
@IgnoreFields({"var1", "var3"})
public class TestClass {
@Id
public MorphiumId id;
public int var1;
public int var2;
public int var3;
}
This is a positive list of fields to use for mongodb. All fields not listed here will be ignored when it comes to mongodb.
@Entity
@LimitToFields({"var1"})
public class TestClass2 {
@Id
public MorphiumId id;
public int var1;
public int var2;
public int var3;
}
LimitToFields
also takes a Class as an argument, then the fields will be limited to the fields of the
given class.
@Entity
@LimitToFields(type = TestClass2.class)
public class TestClass3 extends TestClass2 {
public String notValid;
}
Can be added to any field. This not only has documenting character, it also gives the opportunity to change the name
of this field by setting the fieldName
value. By Default the fieldName is ".", which means
"fieldName based".
Mark an entity to be read only. You'll get an exception when trying to store.
Mark a field to keep the current Version number. Field needs to be of type Long!
If you have a member variable that is a POJO and not a simple value, you can store it as a reference to a different collection - if (and only if!) the POJO is an entity.
This also works for lists and maps. Attention: when reading objects from mongo, references will be de-referenced, which results in one additional call to mongo each - unless you set lazyLoading to true, in which case the child documents will only be loaded when accessed.
Morphium supports lazy loading of references. This is easy to use: just add @Reference(lazyLoading=true) to the reference you want to have loaded lazily.
@Entity
public class MyEntity {
....
@Reference(lazyLoading=true)
private UncachedObject myReference; //will be loaded when first accessed
@Reference
private MyEntity ent; //will be loaded when this object is loaded - use with caution
//this could cause an endless loop
private MyEntity embedded; //this object is not available on its own
//its embedded as subobject in this one
}
When a reference is lazy loaded, the corresponding field is set to a proxy for an instance of the correct type, where only the ObjectId is set. Any access to it will be caught by the proxy, and any method call will cause the object to be read from the db and deserialized. Hence the object will only be loaded upon first access.
Note that calling Object.toString() in a test will load the object from the database, making it appear as if it were not lazily loaded. To test lazy loading, load the base object with the lazy reference and access the reference directly - it will be null, and the referenced object will remain null until its fields are actually accessed.
Do not store the field - similar to @IgnoreFields
or @LimitToFields
Cache settings for this entity, see the chapter about transparent caching above for more details.
Encryption settings for this field. See chapter about field encryption for details
Usually, Morphium does not store null values; the corresponding document just would not contain the given field(s) at all. Sometimes that might cause problems, so if you add @UseIfNull to a field, it will be stored to mongo even if it is null.
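A minimal sketch:
@Entity
public class MyEntity {
    @Id
    private MorphiumId id;
    @UseIfNull
    private String alwaysStored;  // written as null if unset
    private String onlyIfSet;     // omitted from the document when null
}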
This annotation on an entity tells Morphium that the entity has lifecycle methods defined. Those methods all need to be marked with the corresponding annotation:
- @PostLoad
- @PostRemove
- @PostStore
- @PostUpdate
- @PreRemove - may throw a MorphiumAccessVetoException to abort the removal
- @PreStore - may throw a MorphiumAccessVetoException to abort the store
- @PreUpdate - may throw a MorphiumAccessVetoException to abort the update
The methods these annotations are added to must not have any parameters; they should only access the local object/entity. A sketch follows after this list.
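A hedged sketch of a lifecycle-enabled entity - the class-level annotation is assumed to be @Lifecycle, and the exception constructor taking a message is an assumption as well:
@Entity
@Lifecycle
public class AuditedEntity {
    @Id
    private MorphiumId id;
    private String value;

    @PreStore
    public void checkBeforeStore() {
        // veto the store if the entity is not in a valid state
        if (value == null) {
            throw new MorphiumAccessVetoException("value must not be null");
        }
    }

    @PostLoad
    public void afterLoad() {
        // runs right after the object was read from mongo
    }
}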
Only used if auto-versioning is enabled in @Entity. Defines the field that holds the version number.
Specify the safety for this entity when it comes to writing to mongo. This can range from "NONE" to "WAIT FOR ALL SLAVES". The available settings are (see the example after this list):
- IGNORE_ERRORS: none, no checking is done
- NORMAL: none, but network socket errors are raised
- BASIC: checks the server for errors, in addition to raising network socket errors
- WAIT_FOR_SLAVE: checks servers (at least 2) for errors, in addition to raising network socket errors
- MAJORITY: wait for at least 50% of the slaves to have written the data
- WAIT_FOR_ALL_SLAVES: waits for all slaves to have committed the data. This depends on how many slaves are available in the replica set; wise timeout settings are important here. See WriteConcern in the MongoDB Java driver for additional information
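For example (the same usage appears in the geo-search example further below):
@Entity
@WriteSafety(level = SafetyLevel.MAJORITY)
public class ImportantData {
    @Id
    private MorphiumId id;
    private String payload;
}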
Morphium tracks the cluster status internally in order to react properly to different scenarios. For example, if one node goes down, waiting for all nodes to write the data would block the application until the last cluster member comes back up.
This is defined by the w-setting in WriteSafety. In a nutshell, it tells mongo on how many cluster nodes the data must be written, and mongo will wait until this number is reached.
This caused major problems with our environments, like having different cluster configurations in test and production environments.
Morphium fixes that issue: when "WAIT_FOR_ALL_SLAVES" is defined in WriteSafety, it sets the w-value according to the number of currently available slaves, resulting in no blocking.
By default, Java does not support inheritance of annotations. This is OK in most cases, but in the case of entities it's a bugger. We added annotation inheritance to Morphium to be able to build flexible data structures and store them to mongo.
Well, it's quite easy, actually ;-) The algorithm for getting the inherited annotations looks as follows (simplified):
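The simplified algorithm itself is not reproduced here, so as a hedged sketch of the idea (not Morphium's actual implementation):
<T extends java.lang.annotation.Annotation> T getAnnotationFromHierarchy(Class<?> cls, Class<T> annotation) {
    while (cls != null && !cls.equals(Object.class)) {
        if (cls.isAnnotationPresent(annotation)) {
            return cls.getAnnotation(annotation); // the most specific class wins
        }
        cls = cls.getSuperclass(); // walk up the class hierarchy
    }
    return null; // annotation not present anywhere in the hierarchy
}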
This way, all annotations in the hierarchy are taken into account and the most recent one is taken. You can always change the annotations when subclassing, although you cannot "erase" them (which means, if you inherit from an entity, it's always an entity). For Example:
@Entity
@NoCache
public class Person {
@Id
private ObjectId id;
....
}
And the subclass:
@Cache(writeCache=true, readCache=false)
public class Parent extends Person {
@Reference
private List<Person> parentFrom;
...
}
Please keep in mind that, unless specified otherwise, the class name will be taken as the name for your collection. Also, be sure to store the class name in the collection (set polymorph=true in the @Entity annotation) if you want to store different types in one collection.
MongoDB introduced a feature called change streams with V4.0. This is a special kind of query that returns all changes made to a database or collection, which is very useful if you want to be notified about changes to certain types or about certain commands being run.
Change streams are only available when connected to a replica set.
Morphium supports change streams; in fact, the messaging subsystem is built completely on this feature.
The easiest way to use change streams is Morphium's ChangeStreamMonitor:
ChangeStreamMonitor m = new ChangeStreamMonitor(morphium, UncachedObject.class);
m.start();
final AtomicInteger cnt = new AtomicInteger(0);
m.addListener(evt -> {
printevent(evt);
cnt.set(cnt.get() + 1);
return true;
});
Thread.sleep(1000);
for (int i = 0; i < 100; i++) {
morphium.store(new UncachedObject("value " + i, i));
}
Thread.sleep(5000);
m.terminate();
assert (cnt.get() >= 100 && cnt.get() <= 101) : "count is wrong: " + cnt.get();
morphium.store(new UncachedObject("killing", 0));
The monitor by definition runs asynchronously; under the hood it uses the watch methods on the database or collection (see the usage sketch after this list):
- morphium.watch(Class type, boolean updateFullDocument, ChangeStreamListener lst): watches for change events in a synchronous call. This call blocks until the listener returns false
- morphium.watchAsync(...) (same parameters as above) runs asynchronously. Attention: the settings for asyncExecutor in MorphiumConfig might affect the behaviour of this call
There are also methods for watching all changes that happen in the connected database, which might result in a lot of callbacks: watchDB() and watchDBAsync().
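A hedged usage sketch - the accessor getOperationType() on ChangeStreamEvent is an assumption:
// blocks until the listener returns false
morphium.watch(UncachedObject.class, true, evt -> {
    log.info("got event: " + evt.getOperationType());
    return false; // stop watching after the first event
});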
There is also an older implementation of this, the OplogMonitor. It does more or less the same thing as the ChangeStreamMonitor, but also runs with older installations of MongoDB (when connected to a replica set). You'd probably want to use the ChangeStreamMonitor instead, as it is more efficient.
OplogListener lst = data -> {
log.info(Utils.toJsonString(data));
gotIt = true;
};
OplogMonitor olm = new OplogMonitor(morphium);
olm.addListener(lst);
olm.start();
Thread.sleep(100);
UncachedObject u = new UncachedObject("test", 123);
morphium.store(u);
Thread.sleep(1250);
assert (gotIt);
gotIt = false;
morphium.set(u, UncachedObject.Fields.value, "new value");
Thread.sleep(550);
assert (gotIt);
gotIt = false;
olm.removeListener(lst);
u = new UncachedObject("test", 123);
morphium.store(u);
Thread.sleep(200);
assert (!gotIt);
olm.stop();
The idea behind partial updates is that only the changes to an entity are transmitted to the database, reducing the load on the network and on MongoDB itself.
This is the easiest way - you already know which fields you changed, and maybe you do not even want to store all fields that you actually did change. In that case, call the updateUsingFields method:
UncachedObject o....
o.setValue("A value");
o.setCounter(105);
Morphium.get().updateUsingFields(o,"value");
//does only send updates for Value to mongodb
//counter is ignored
updateUsingFields() honours the lifecycle methods as well as caches (write cache, or clearing the read cache on write). Take a look at some code from the corresponding JUnit test for a better understanding:
UncachedObject o... //read from MongoDB
o.setValue("Updated!");
morphium.updateUsingFields(o, "value");
log.info("uncached object altered... look for it");
Query<UncachedObject> c=morphium.createQueryFor(UncachedObject.class);
UncachedObject fnd= (UncachedObject) c.f("_id").eq( o.getMongoId()).get();
assert(fnd.getValue().equals("Updated!")):"Value not changed? "+fnd.getValue();
If you need to send a lot of write requests to MongoDB, it might be useful to use bulk requests. MongoDB supports this: instead of sending each command on its own, all of them are sent in one single bulk command to the database, which is a lot more efficient.
To use that via Morphium you need to add your requests to the BulkRequestContext
:
MorphiumBulkContext c = morphium.createBulkRequestContext(UncachedObject.class, false);
c.addSetRequest(morphium.createQueryFor(UncachedObject.class).f("counter").gte(0), "counter", 999, true, true);
//could add more requests here
Map<String, Object> ret = c.runBulk();
All basic operations you might want to send in a bulk are supported. If there is a special request for which there is no direct support in the bulk context, use the generic method addCustomUpdateRequest() to add it; you need to pass in your request's map representation.
MongoDB supports transactions in newer releases; Morphium supports them as well:
@Test
public void transactionTest() throws Exception {
for (int i = 0; i < 10; i++) {
try {
morphium.createQueryFor(UncachedObject.class).delete();
Thread.sleep(100);
TestEntityNameProvider.number.incrementAndGet();
log.info("Entityname number: " + TestEntityNameProvider.number.get());
createUncachedObjects(10);
Thread.sleep(100);
morphium.startTransaction();
Thread.sleep(100);
log.info("Count after transaction start: " + morphium.createQueryFor(UncachedObject.class).countAll());
UncachedObject u = new UncachedObject("test", 101);
morphium.store(u);
Thread.sleep(100);
long cnt = morphium.createQueryFor(UncachedObject.class).countAll();
if (cnt != 11) {
morphium.abortTransaction();
assert (cnt == 11) : "Count during transaction: " + cnt;
}
morphium.inc(u, "counter", 1);
Thread.sleep(100);
u = morphium.reread(u);
assert (u.getCounter() == 102);
morphium.abortTransaction();
Thread.sleep(100);
cnt = morphium.createQueryFor(UncachedObject.class).countAll();
u = morphium.reread(u);
assert (u == null);
assert (cnt == 10) : "Count after rollback: " + cnt;
} catch (Exception e) {
log.error("ERROR", e);
morphium.abortTransaction();
}
}
}
Internally, Morphium uses the transaction context if the current thread started a transaction. If you need a transaction spanning several threads, you need to pass on the current transaction session:
ctx=morphium.getDriver().getTransactionContext();
...
//other thread
morphium.getDriver().setTransactionContext(ctx);
Caveat: mongoDB does not support nested transactions (yet), so you will get an Exception when trying to start another transaction in the same thread.
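A hedged sketch of the usual single-thread pattern (commitTransaction() is assumed to be the counterpart of abortTransaction()):
morphium.startTransaction();
try {
    morphium.store(new UncachedObject("in transaction", 1));
    morphium.commitTransaction();   // make the changes permanent
} catch (Exception e) {
    morphium.abortTransaction();    // roll back on any error
}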
There are a lot of listeners in Morphium that keep you informed about what is going on in the system. Some of them also allow you to adapt the behaviour to your needs:
Morphium monitors the status of the replica set it is connected to (every 5s by default; this can be changed via the MorphiumConfig setting replicaSetMonitoringTimeout). You can get this information on demand by calling morphium.getReplicasetStatus().
But you can also be informed whenever there is a change in the cluster by implementing the interface (since Morphium V4.2):
public interface ReplicasetStatusListener {
void gotNewStatus(Morphium morphium, ReplicaSetStatus status);
/**
* informs, if the replicaset status could not be obtained.
* @param numErrors - how many errors getting the status in a row we already have
*/
void onGetStatusFailure(Morphium morphium, int numErrors);
/**
* called, if the ReplicasetMonitor aborts due to too many errors
* @param numErrors - number of errors occured
*/
void onMonitorAbort(Morphium morphium, int numErrors);
/**
*
* @param hostsDown - list of hostnames that are not up
* @param currentHostSeed - list of currently available replicaset members
*/
void onHostDown(Morphium morphium, List<String> hostsDown,List<String> currentHostSeed);
}
The ReplicasetStatus
does contain a lot of information about the replicaset itself:
public class ReplicaSetStatus {
private String set;
private String myState;
private String syncSourceHost;
private Date date;
private int term;
private int syncSourceId;
private long heartbeatIntervalMillis;
private int majorityVoteCount;
private int writeMajorityCount;
private int votingMembersCount;
private int writableVotingMembersCount;
private long lastStableRecoveryTimestamp;
private List<ReplicaSetNode> members;
private Map<String,Object> optimes;
private Map<String,Object> electionCandidateMetrics;
}
public class ReplicaSetNode {
private int id;
private String name;
private double health;
private int state;
@Property(fieldName = "stateStr")
private String stateStr;
private long uptime;
@Property(fieldName = "optimeDate")
private Date optimeDate;
@Property(fieldName = "lastHeartbeat")
private Date lastHeartbeat;
private int pingMs;
private String syncSourceHost;
private int syncSourceId;
private String infoMessage;
private Date electionDate;
private int configVersion;
private int configTerm;
private String lastHeartbeatMessage;
private boolean self;
}
See mongoDB documentation of rs.status()
command for more
information on the different fields.
Via this interface, you will be informed about cache operations and may interfere with them or change the behaviour:
public interface CacheListener {
/**
* ability to alter cached entries or avoid caching overall
*
* @param toCache - datastructure containing cache key and result
* @param <T> - the type
* @return false, if not to cache
*/
//return the cache entry to be stored, null if not
<T> CacheEntry<T> wouldAddToCache(Object k, CacheEntry<T> toCache, boolean updated);
//return false, if you do not want cache to be cleared
<T> boolean wouldClearCache(Class<T> affectedEntityType);
//return false, if you do not want entry to be removed from cache
<T> boolean wouldRemoveEntryFromCache(Object key, CacheEntry<T> toRemove, boolean expired);
}
These are special cache listeners which will be informed when a cache needs to be updated because of incoming clear or update requests. There are two direct sub-interfaces:
WatchingCacheSyncListener
: to be used with WatchingCacheSynchronizer
MessagingCacheSyncListener
: to be used with MessagingCacheSynchronizer
The base interface is CacheSyncListener:
public interface CacheSyncListener {
/**
* before clearing cache - if cls == null whole cache
* Message m contains information about reason and stuff...
*/
@SuppressWarnings("UnusedParameters")
void preClear(Class cls) throws CacheSyncVetoException;
@SuppressWarnings("UnusedParameters")
void postClear(Class cls);
}
and the subclasses WatchingCacheSyncListener
(just adds one other method):
public interface WatchingCacheSyncListener extends CacheSyncListener {
void preClear(Class<?> type, String operation);
}
and the MessagingCacheSyncListener
which adds some Messaging based methods:
public interface MessagingCacheSyncListener extends CacheSyncListener {
/**
* Class is null for CLEAR ALL
*
* @param cls
* @param m - message about to be send - add info if necessary!
* @throws CacheSyncVetoException
*/
@SuppressWarnings("UnusedParameters")
void preSendClearMsg(Class cls, Msg m) throws CacheSyncVetoException;
@SuppressWarnings("UnusedParameters")
void postSendClearMsg(Class cls, Msg m);
}
As already mentioned, this listener is used to be informed about changes in your data.
public interface ChangeStreamListener {
/**
* return true, if you want to continue getting events.
*
* @param evt
* @return
*/
boolean incomingData(ChangeStreamEvent evt);
}
This is one of the core functionalities of Morphium messaging: the message listener is the place to be informed about incoming messages.
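The listener interface itself is not reproduced here, so as a hedged sketch - the interface name MessageListener and the onMessage signature are assumptions, check the Morphium source:
public interface MessageListener<T extends Msg> {
    // return an answer message to be sent back, or null for no answer
    T onMessage(Messaging msg, T m) throws InterruptedException;
}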
If you add a listener for this kind of event, you will be informed about any store via Morphium. This is much the same as the Lifecycle annotation and the corresponding methods, but a different design pattern. If a MorphiumAccessVetoException is thrown, the corresponding action is aborted.
public interface MorphiumStorageListener<T> {
void preStore(Morphium m, T r, boolean isNew) throws MorphiumAccessVetoException;
void preStore(Morphium m, Map<T, Boolean> isNew) throws MorphiumAccessVetoException;
@SuppressWarnings("UnusedParameters")
void postStore(Morphium m, T r, boolean isNew);
@SuppressWarnings("UnusedParameters")
void postStore(Morphium m, Map<T, Boolean> isNew);
@SuppressWarnings("UnusedParameters")
void preRemove(Morphium m, Query<T> q) throws MorphiumAccessVetoException;
@SuppressWarnings({"EmptyMethod", "UnusedParameters"})
void preRemove(Morphium m, T r) throws MorphiumAccessVetoException;
@SuppressWarnings("UnusedParameters")
void postRemove(Morphium m, T r);
@SuppressWarnings("UnusedParameters")
void postRemove(Morphium m, List<T> lst);
@SuppressWarnings("UnusedParameters")
void postDrop(Morphium m, Class<? extends T> cls);
@SuppressWarnings("UnusedParameters")
void preDrop(Morphium m, Class<? extends T> cls) throws MorphiumAccessVetoException;
@SuppressWarnings("UnusedParameters")
void postRemove(Morphium m, Query<T> q);
@SuppressWarnings({"EmptyMethod", "UnusedParameters"})
void postLoad(Morphium m, T o);
@SuppressWarnings({"EmptyMethod", "UnusedParameters"})
void postLoad(Morphium m, List<T> o);
@SuppressWarnings("UnusedParameters")
void preUpdate(Morphium m, Class<? extends T> cls, Enum updateType) throws MorphiumAccessVetoException;
@SuppressWarnings("UnusedParameters")
void postUpdate(Morphium m, Class<? extends T> cls, Enum updateType);
enum UpdateTypes {
SET, UNSET, PUSH, PULL, INC, @SuppressWarnings("unused")DEC, MUL, MIN, MAX, RENAME, POP, CURRENTDATE, CUSTOM,
}
}
There is a listener/watch functionality that works with older MongoDB installations: the OplogListener is used by the OplogMonitor and uses the oplog to be informed about changes.
public interface OplogListener {
void incomingData(Map<String, Object> data);
}
If you need to gather performance data about your mongoDB setup, the Profiling listener has you covered. It gives detailed information about the duration of any write or read access:
public interface ProfilingListener {
void readAccess(Query query, long time, ReadAccessType t);
void writeAccess(Class type, Object o, long time, boolean isNew, WriteAccessType t);
}
The aggregation framework is a very powerful feature of MongoDB, and Morphium has supported it from the start. But with Morphium V4.2.x we made using it a lot easier.
The core of the aggregation framework in Morphium is the Aggregator. It is created (using the configured AggregatorFactory) by a Morphium instance:
Aggregator<Source,Result> aggregator=morphium.createAggregator(Source.class,Result.class);
This creates an aggregator that reads from the entity Source (or rather the corresponding collection) and returns the results as Result. Usually you will have to define a Result entity in order to use aggregation, but since Morphium V4.2 it is also possible to use a Map as the result class.
After preparing the aggregator, you need to define the stages. All currently available stages are also available in Morphium. For a list of available stages, just consult the mongodb documentation.
In a nutshell, the aggregation framework runs all documents through a pipeline of commands that either reduce the input (like a query), change the output (a projection), or calculate some values (like sum, count etc.).
The most important pipeline stage is probably the "group" stage. This is similar to group by in SQL, but more powerful, as you can have several group stages in a pipeline.
Here is an example with a simple pipeline:
Aggregator<UncachedObject, Aggregate> a = morphium.createAggregator(UncachedObject.class, Aggregate.class);
assert (a.getResultType() != null);
//reduce input
a = a.project("counter");
//Filter
a = a.match(morphium.createQueryFor(UncachedObject.class)
.f("counter").gt(100));
//Sort, used with $first/$last
a = a.sort("counter");
//limit data
a = a.limit(15);
//group by - here we only have one static group, but could be any field or value
a = a.group("all").avg("schnitt", "$counter").sum("summe", "$counter").sum("anz", 1).last("letzter", "$counter").first("erster", "$counter").end();
//result projection
HashMap<String, Object> projection = new HashMap<>();
projection.put("summe", 1);
projection.put("anzahl", "$anz");
projection.put("schnitt", 1);
projection.put("last", "$letzter");
projection.put("first", "$erster");
a = a.project(projection);
List<Aggregate> lst = a.aggregate();
assert (lst.size() == 1) : "Size wrong: " + lst.size();
log.info("Sum : " + lst.get(0).getSumme());
log.info("Avg : " + lst.get(0).getSchnitt());
log.info("Last : " + lst.get(0).getLast());
log.info("First: " + lst.get(0).getFirst());
log.info("count: " + lst.get(0).getAnzahl());
assert (lst.get(0).getAnzahl() == 15) : "did not find 15, instead found: " + lst.get(0).getAnzahl();
But you could have that result grouped again for example or add fields to it or change values or ....
Consult the MongoDB documentation for more information about the aggregation pipeline.
MongoDB has its own expression language, which is mainly used in aggregation. Morphium's representation thereof is Expr.
Expr has a lot of factory methods to create special Expr instances; for example, Expr.string() returns a string expression (a string constant), Expr.gt() creates the "greater than" expression, and so on.
Examples of expressions:
Expr e = Expr.add(Expr.field("the_field"), Expr.abs(Expr.field("test")), Expr.doubleExpr(128.0));
Object o = e.toQueryObject();
String val = Utils.toJsonString(o);
log.info(val);
assert(val.equals("{ \"$add\" : [ \"$the_field\", { \"$abs\" : [ \"$test\"] } , 128.0] } "));
e = Expr.in(Expr.doubleExpr(1.2), Expr.arrayExpr(Expr.intExpr(12), Expr.doubleExpr(1.2), Expr.field("testfield")));
val=Utils.toJsonString(e.toQueryObject());
log.info(val);
assert(val.equals("{ \"$in\" : [ 1.2, [ 12, 1.2, \"$testfield\"]] } "));
e = Expr.zip(Arrays.asList(Expr.arrayExpr(Expr.intExpr(1), Expr.intExpr(14)), Expr.arrayExpr(Expr.intExpr(1), Expr.intExpr(14))), Expr.bool(true), Expr.field("test"));
val=Utils.toJsonString(e.toQueryObject());
log.info(val);
assert(val.equals("{ \"$zip\" : { \"inputs\" : [ [ 1, 14], [ 1, 14]], \"useLongestLength\" : true, \"defaults\" : \"$test\" } } "));
e = Expr.filter(Expr.arrayExpr(Expr.intExpr(1), Expr.intExpr(14), Expr.string("asV")), "str", Expr.string("NEN"));
val=Utils.toJsonString(e.toQueryObject());
log.info(val);
assert(val.equals("{ \"$filter\" : { \"input\" : [ 1, 14, \"asV\"], \"as\" : \"str\", \"cond\" : \"NEN\" } } "));
The output of this little program would be:
{ "$add" : [ "$the_field", { "$abs" : [ "$test"] } , 128.0] }
{ "$in" : [ 1.2, [ 12, 1.2, "$testfield"]] }
{ "$zip" : { "inputs" : [ [ 1, 14], [ 1, 14]], "useLongestLength" : true, "defaults" : "$test" } }
{ "$filter" : { "input" : [ 1, 14, "asV"], "as" : "str", "cond" : "NEN" } }
This way you can create complex aggregation pipelines:
Aggregator<UncachedObject, Aggregate> a = morphium.createAggregator(UncachedObject.class, Aggregate.class);
assert (a.getResultType() != null);
a = a.project(Utils.getMap("counter", (Object) Expr.intExpr(1)).add("cnt2", Expr.field("counter")));
a = a.match(Expr.gt(Expr.field("counter"), Expr.intExpr(100)));
a = a.sort("counter");
a = a.limit(15);
a = a.group(Expr.string(null)).expr("schnitt", Expr.avg(Expr.field("counter"))).expr("summe", Expr.sum(Expr.field("counter"))).expr("anz", Expr.sum(Expr.intExpr(1))).expr("letzter", Expr.last(Expr.field("counter"))).expr("erster", Expr.first(Expr.field("counter"))).end();
This expression language can also be used in queries:
Query<UncachedObject> q = morphium.createQueryFor(UncachedObject.class);
q.expr(Expr.gt(Expr.field(UncachedObject.Fields.counter), Expr.intExpr(50)));
log.info(Utils.toJsonString(q.toQueryObject()));
List<UncachedObject> lst = q.asList();
assert (lst.size() == 50) : "Size wrong: " + lst.size();
for (UncachedObject u : q.q().asList()) {
u.setDval(Math.random() * 100);
morphium.store(u);
}
q = q.q().expr(Expr.gt(Expr.field(UncachedObject.Fields.counter), Expr.field(UncachedObject.Fields.dval)));
lst = q.asList();
Hint: if you use Expr in your code, it is probably a good idea to use import static de.caluga.morphium.aggregation.Expr.*; to make the code easier to read and understand.
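For example, the match stage from the pipeline above becomes much shorter:
import static de.caluga.morphium.aggregation.Expr.*;
// ...
a = a.match(gt(field("counter"), intExpr(100)));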
There are some places you also might want to look at for additional information on MongoDB or Morphium. The following examples from the Morphium test suite show some of these features in action:
Messaging msg = new Messaging(morphium, 100, true);
msg.start();
MessagingCacheSynchronizer cs = new MessagingCacheSynchronizer(msg, morphium);
Query<Msg> q = morphium.createQueryFor(Msg.class);
long cnt = q.countAll();
assert (cnt == 0) : "Already a message?!?! " + cnt;
cs.sendClearMessage(CachedObject.class, "test");
Thread.sleep(2000);
waitForWrites();
cnt = q.countAll();
assert (cnt == 1) : "there should be one msg, there are " + cnt;
msg.terminate();
cs.detach();
@Test
public void nearTest() throws Exception {
morphium.dropCollection(Place.class);
ArrayList<Place> toStore = new ArrayList<Place>();
// morphium.ensureIndicesFor(Place.class);
for (int i = 0; i < 1000; i++) {
Place p = new Place();
List<Double> pos = new ArrayList<Double>();
pos.add((Math.random() * 180) - 90);
pos.add((Math.random() * 180) - 90);
p.setName("P" + i);
p.setPosition(pos);
toStore.add(p);
}
morphium.storeList(toStore);
Query<Place> q = morphium.createQueryFor(Place.class).f("position").near(0, 0, 10); // places within a max distance of 10 around (0,0), using the 2d index
long cnt = q.countAll();
log.info("Found " + cnt + " places around 0,0 (10)");
List<Place> lst = q.asList();
for (Place p : lst) {
log.info("Position: " + p.getPosition().get(0) + " / " + p.getPosition().get(1));
}
}
@Index("position:2d")
@NoCache
@WriteBuffer(false)
@WriteSafety(level = SafetyLevel.MAJORITY)
@DefaultReadPreference(ReadPreferenceLevel.PRIMARY)
@Entity
public static class Place {
@Id
private ObjectId id;
public List<Double> position;
public String name;
public ObjectId getId() {
return id;
}
public void setId(ObjectId id) {
this.id = id;
}
public List<Double> getPosition() {
return position;
}
public void setPosition(List<Double> position) {
this.position = position;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
@Test
public void basicIteratorTest() throws Exception {
createUncachedObjects(1000);
Query<UncachedObject> qu = getUncachedObjectQuery();
long start = System.currentTimeMillis();
MorphiumIterator<UncachedObject> it = qu.asIterable(2); // iterate the result with a read-ahead buffer (window) of 2 objects
assert (it.hasNext());
UncachedObject u = it.next();
assert (u.getCounter() == 1);
log.info("Got one: " + u.getCounter() + " / " + u.getValue());
log.info("Current Buffersize: " + it.getCurrentBufferSize());
assert (it.getCurrentBufferSize() == 2);
u = it.next();
assert (u.getCounter() == 2);
u = it.next();
assert (u.getCounter() == 3);
assert (it.getCount() == 1000);
assert (it.getCursor() == 3);
u = it.next();
assert (u.getCounter() == 4);
u = it.next();
assert (u.getCounter() == 5);
while (it.hasNext()) {
u = it.next();
log.info("Object: " + u.getCounter());
}
assert (u.getCounter() == 1000);
log.info("Took " + (System.currentTimeMillis() - start) + " ms");
}
@Test
public void asyncReadTest() throws Exception {
asyncCall = false;
createUncachedObjects(100);
Query<UncachedObject> q = morphium.createQueryFor(UncachedObject.class);
q = q.f("counter").lt(1000);
q.asList(new AsyncOperationCallback<UncachedObject>() {
@Override
public void onOperationSucceeded(AsyncOperationType type, Query<UncachedObject> q, long duration, List<UncachedObject> result, UncachedObject entity, Object... param) {
log.info("got read answer");
assert (result != null) : "Error";
assert (result.size() == 100) : "Error";
asyncCall = true;
}
@Override
public void onOperationError(AsyncOperationType type, Query<UncachedObject> q, long duration, String error, Throwable t, UncachedObject entity, Object... param) {
assert false;
}
});
waitForAsyncOperationToStart(1000000);
int count = 0;
while (q.getNumberOfPendingRequests() > 0) {
count++;
assert (count < 10);
System.out.println("Still waiting...");
Thread.sleep(1000);
}
assert (asyncCall);
}
@Test
public void asyncStoreTest() throws Exception {
asyncCall = false;
super.createCachedObjects(1000);
waitForWrites();
log.info("Uncached object preparation");
super.createUncachedObjects(1000);
waitForWrites();
Query<UncachedObject> uc = morphium.createQueryFor(UncachedObject.class);
uc = uc.f("counter").lt(100);
morphium.delete(uc, new AsyncOperationCallback<Query<UncachedObject>>() {
@Override
public void onOperationSucceeded(AsyncOperationType type, Query<Query<UncachedObject>> q, long duration, List<Query<UncachedObject>> result, Query<UncachedObject> entity, Object... param) {
log.info("Objects deleted");
}
@Override
public void onOperationError(AsyncOperationType type, Query<Query<UncachedObject>> q, long duration, String error, Throwable t, Query<UncachedObject> entity, Object... param) {
assert false;
}
});
uc = uc.q();
uc.f("counter").mod(3, 2);
morphium.set(uc, "counter", 0, false, true, new AsyncOperationCallback<UncachedObject>() {
@Override
public void onOperationSucceeded(AsyncOperationType type, Query<UncachedObject> q, long duration, List<UncachedObject> result, UncachedObject entity, Object... param) {
log.info("Objects updated");
asyncCall = true;
}
@Override
public void onOperationError(AsyncOperationType type, Query<UncachedObject> q, long duration, String error, Throwable t, UncachedObject entity, Object... param) {
log.info("Objects update error");
}
});
waitForWrites();
assert(morphium.createQueryFor(UncachedObject.class).f("counter").eq(0).countAll() > 0);
assert (asyncCall);
}
This document was written by the authors with the utmost care, but there is no guarantee of 100% accuracy. If you have any questions, find a mistake or have suggestions for improvements, please contact the authors of this document and the developers of Morphium via github.com/sboesebeck/morphium or send an email to sb@caluga.de
you can even use aggregation on it to gather more information about your messages ↩︎
those throw an Exception to let you know it is missing ↩︎
usually this only makes sense when there is more than one recipient ↩︎
attention: the "top level" document needs to be an Entity so that all necessary settings are available there. But "subdocuments"/properties may be just serializable ↩︎
text search and text indices can be disabled in the mongoDB config; creating the index would then throw an Exception ↩︎
can be switched off in morphiumConfig ↩︎
as it takes some time for Morphium and mongo to determine whether a cluster member is down, some requests might actually block ↩︎
also only works when connected to a replicaset ↩︎
does not work with the InMemoryDriver yet ↩︎
this blog is powered by Morphium and mongodb ↩︎
category: security
2014-03-06 - Tags: security
no english version available yet
category: data security
2013-05-15 - Tags:
What has happened now is almost a worst-case scenario and should really open the eyes of everyone who dismisses data protection concerns. So what happened?
In recent weeks, tens of thousands of cease-and-desist letters (probably around 30,000, with more expected) for copyright infringement on the internet were sent to unsuspecting users. That in itself would be nothing special, but this time the reason is that the recipients allegedly watched the copyrighted "work" on a porn streaming platform (redtube.com). Until now, streaming was not considered distribution of illegal content (since you distribute nothing and do not download the file either), and users normally could not be served a lawsuit or warning letter. Unless it is really obvious that the whole thing is illegal (for example, watching current cinema releases or series before their German TV premiere).
The situation here is different. Redtube is apparently used by the porn industry as an advertising platform, so the videos promoted there should be free of rights claims, or the uploader assigns the streaming rights to redtube.com, presumably also confirming somewhere that he actually owns those rights. It was apparently in no way recognizable whether or why the works in question were supposed to be copyrighted material; they did not differ from the other offerings on the streaming site. Normally this would be nothing special either: the rights holder demands the identity of whoever uploaded the video and can go after that person. But that would be far less lucrative than sending warning letters to tens of thousands of people, each of whom is supposed to pay at least 200 euros.
A request for the release of the users' private addresses was apparently filed with the regional court in Cologne (Landgericht Köln), worded as if it concerned a file-sharing network rather than a streaming platform. The after-the-fact justification, that when streaming you end up with the entire file on your disk, is also very flimsy: for one thing, the file is not stored completely while streaming, and for another, it seems rather unlikely that a consumer of these films actually watches them to the end. I cannot imagine that the plot is particularly gripping, if you know what I mean.
How did the nice lawyers get hold of the IP addresses in the first place? Not from Redtube: they have strongly distanced themselves from the whole affair and are in turn considering legal action against the law firm.
The rumors about this are truly hair-raising, and if only a fraction of them is true, it is extremely shady. There is even talk of viruses and trojans written specifically for this purpose. The far more likely variant, however, is that an ad banner was used for IP tracking. It works like this: on Redtube you can upload a clip or trailer and attach your own ad banner to it, so that in the ideal case people then buy the film. This banner is hosted by whoever places the ad, i.e. on a different server than Redtube's. And on your own server you can of course log all IP addresses that access it.
This is shady insofar as the banner must have been set up knowingly by whoever uploaded the video. And if that is the case, he must have known that the video was a copyrighted work. Why, then, would he place such a banner? Above all, it apparently turns out that the video had been allowed to stream there for quite some time, but the banner was only placed recently. So instead of notifying Redtube that the film was available there and demanding its removal from their servers, someone preferred to place an ad banner that then made it possible to send warning letters to tens of thousands of users?
Astonishing is this graph (the original post about it is here), which clearly shows that access to the content in question suddenly spiked exactly during the period in which the alleged copyright infringements took place. And that, coincidentally, the domain for the ad banner was bought exactly two days earlier... Coincidence?
As mentioned, it was apparently not recognizable to users that this was a "work" streamed without authorization. And the users' addresses should never have been handed over in the first place.
So what does all this have to do with data protection?
It proves once again that data in the wrong hands can always somehow be turned into money, or at least cost a lot of money. Even if it turns out that all these warning letters were unlawful (fortunately it currently looks that way; see also here) and the addresses behind the IPs should never have been released, those affected are still stuck with costs of several hundred euros! In theory they can recover that from the party responsible (i.e. the rights holder), but only by suing for damages. And it is questionable whether that is really so easy, since the rights holder is apparently based in Switzerland. International lawsuits are expensive. And you have to pay your own lawyer up front... And if, as it seems, a collective action against these warning letters actually succeeds, the company will unfortunately be bankrupt in no time, and the victims will get nothing, or not much.
I am also firmly convinced that this was only attempted because it involves smut. Thousands have probably paid before the whole thing even became public, precisely to avoid publicity, since nobody wants to be associated with that kind of material. And that is apparently where a lot of money can be made...
The whole thing is a one-shot operation: a quick attempt to make money from ignorance and embarrassment, in full knowledge of being on thin ice. Even so, they have surely already collected several hundred thousand euros.
In short: because the data is embarrassing and people do not want it to become public, they pay. Does that ring a bell? I described something similar as a horror scenario in one of my posts, and now it has become reality.
And now comes the killer argument: "I don't watch porn on the internet."
With that, of course, everything is fine, and it is only a problem for a few individuals.
ARGH!!! That is not the point! The point is that simply because you clicked a wrong link (because that is exactly what this is about), without knowing what was behind it, you suddenly have to pay damages. If this is actually legitimized, the internet is dead! You could no longer move around freely, no longer click any link without knowing beforehand what is behind it. Just imagine clicking a link to a page that quotes from a book, and the author objects: bang, you have a lawsuit on your hands! And even if you go to court and perhaps even win, you first have to front the money (which not everyone can) and then sue for damages afterwards, a suit you might also lose, and in any case you have to advance the costs again.
I find it truly frightening how sloppily the courts worked here, and how easily someone with enough "creativity" can get at the private addresses of careless internet users. This should never have been possible. The damage to those involved is real in any case, because unfortunately you cannot get out of this affair without a lawyer!
And politicians are seriously trying to reintroduce data retention right now? Great, then you won't even need ad banners anymore; you could surely do this kind of thing retroactively and enrich yourself that way... Unbelievable!
And just to be clear: no, I am not affected 😉
Update: it seems to be confirmed that a deliberate deception was indeed attempted, which of course casts the whole thing in an even worse light and shows once again how important it is to protect one's privacy. The public prosecutor is investigating, rightly so; anyone affected should seek legal advice!
category: Computer
2013-05-04 - Tags:
no english version available yet
category: global
2013-04-19 - Tags: about
originally posted on: https://boesebeck.name
Whose blog is this? Well, who am I... oddly enough, this question is surprisingly difficult to answer. My IT career can be read here on the blog under My IT History.
There is not much more to say. You will find very little about my private life here, so here is my professional career in a nutshell:
category: global
2011-09-14 - Tags: tweet
originally posted on: https://boesebeck.name
this is a German article about data retention: http://t.co/vrQAaPk