Maintain a PostgreSQL database using osm2pgsql
Question
tarifa
Hi there, good day dear friends,
I am trying to maintain a PostgreSQL database that I load and update with osm2pgsql.
The idea: I want to track the changes made by the daily replication files. Do I need to create triggers that fire after DELETE, UPDATE and INSERT? Or should I just run a daily update process?
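For the change-tracking idea, one common pattern is a generic audit table plus one AFTER trigger per rendered table. Below is a minimal sketch assuming the default osm2pgsql table names; the `osm_audit` table and `osm_audit_fn` function are hypothetical names of my own, not anything osm2pgsql provides:

```python
# Sketch: generic audit-trigger DDL for tracking row changes made by
# osm2pgsql replication runs. All names (osm_audit, osm_audit_fn) are
# invented for this example -- adapt them to your schema.

AUDIT_TABLE_DDL = """
CREATE TABLE IF NOT EXISTS osm_audit (
    id         bigserial PRIMARY KEY,
    table_name text        NOT NULL,
    operation  text        NOT NULL,      -- 'INSERT' / 'UPDATE' / 'DELETE'
    changed_at timestamptz NOT NULL DEFAULT now(),
    row_data   jsonb                      -- old or new row as JSON
);
"""

AUDIT_FUNCTION_DDL = """
CREATE OR REPLACE FUNCTION osm_audit_fn() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO osm_audit (table_name, operation, row_data)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD));
        RETURN OLD;
    ELSE
        INSERT INTO osm_audit (table_name, operation, row_data)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(NEW));
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;
"""

def make_audit_trigger(table: str) -> str:
    """Return DDL attaching the audit trigger to one osm2pgsql table.

    EXECUTE FUNCTION needs PostgreSQL 11+; on older versions use
    EXECUTE PROCEDURE instead.
    """
    return (
        f"CREATE TRIGGER {table}_audit "
        f"AFTER INSERT OR UPDATE OR DELETE ON {table} "
        f"FOR EACH ROW EXECUTE FUNCTION osm_audit_fn();"
    )

# One trigger per rendering table that osm2pgsql maintains:
for t in ("planet_osm_point", "planet_osm_line", "planet_osm_polygon"):
    print(make_audit_trigger(t))
```

Be aware that replication runs rewrite many rows in bulk, so an audit table like this can grow quickly; diffing against a daily snapshot is a common alternative if you only need day-level granularity.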
The setup: I have an OSM dataset in PostgreSQL, loaded via osm2pgsql. Now I am trying to query all public buildings in the UK, such as:
- hospitals,
- schools,
- fire stations and
- churches.
- gasoline stations, and so on.
In order to do that, I use something like this query:
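The query itself did not come through above; based on the description, it was presumably a lookup on the point table of roughly this shape. This is a sketch only, not the original query, and the amenity values are assumed from the list above:

```python
# Sketch of a point-table amenity lookup against the default
# osm2pgsql schema (table planet_osm_point, column amenity).
AMENITIES = ("hospital", "school", "fire_station",
             "place_of_worship", "fuel")

def points_query(amenities) -> str:
    """Build a SELECT over planet_osm_point for the given amenity values."""
    values = ", ".join(f"'{a}'" for a in amenities)
    return (
        "SELECT osm_id, name, amenity, way AS geom "
        "FROM planet_osm_point "
        f"WHERE amenity IN ({values});"
    )

print(points_query(AMENITIES))
```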
What do you think about this approach? Is it a good method?
Looking at the results, the query only returns some of the existing hospitals. I am pretty sure there are many more: on the map I can see the red plus symbol for hospitals that I know exist there, but they do not show up in the query results.
Question: How can I include all of these buildings?
Assumption: I guess that I am missing the hospitals mapped as areas. We have to run the same query on planet_osm_polygon as well, or we could construct a "union" query. That would come with a clear benefit: it gives us the centre points of the polygons in addition to the points we already have.
I guess we need to select amenity = 'hospital' from the polygon table and UNION ALL it with the points; that alone should give a few more hospitals. The next step would be to find out the node, way or relation ID of a hospital we are still missing and query our database for it.
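The UNION ALL idea can be sketched as follows, assuming the default osm2pgsql schema; ST_Centroid gives the centre point of each polygon:

```python
# Sketch: combine point hospitals with the centroids of polygon
# hospitals, so area-mapped hospitals show up in the same result set.
UNION_QUERY = """
SELECT osm_id, name, way AS geom
FROM planet_osm_point
WHERE amenity = 'hospital'
UNION ALL
SELECT osm_id, name, ST_Centroid(way) AS geom
FROM planet_osm_polygon
WHERE amenity = 'hospital';
"""

print(UNION_QUERY)
```

As a side note for chasing missing IDs: in the default osm2pgsql schema, a negative osm_id in planet_osm_polygon marks a feature that was assembled from a relation, which tells you whether to look the object up as a way or as a relation.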
By the way: can I do this with Python, working against the Overpass API?
What about querying OSM data with the Overpass API, and how can we use that data? One method to download the data is the command-line tools curl or wget. To do this we need to access one of the Overpass API endpoints; the one we will use here has the format http://overpass-api.de/api/interpreter?data=query. Using curl, we can download the OSM XML of our query by running the command
The previously crafted query comes after data=, and the query needs to be URL-encoded. The --globoff option is important in order to use square and curly brackets without them being interpreted by curl. The query then returns its result as OSM XML.
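As an alternative to curl, the same download works from Python with only the standard library; urllib.parse.quote performs the URL-encoding mentioned above. The node(1) query here is just a placeholder example:

```python
from urllib.parse import quote

ENDPOINT = "http://overpass-api.de/api/interpreter"

def overpass_url(query: str) -> str:
    """Build the request URL; brackets, colons and semicolons
    get percent-encoded, which is what --globoff works around in curl."""
    return ENDPOINT + "?data=" + quote(query)

url = overpass_url("[out:json];node(1);out;")
print(url)

# To actually fetch the result (network access required):
# from urllib.request import urlopen
# with urlopen(url) as resp:
#     data = resp.read().decode("utf-8")
```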
Regarding the methods of getting the data: which one is better and more appropriate?
Regarding the formats: there are various output formats to choose from in the documentation. To download the query result as JSON, we need to add [out:json]; to the beginning of our query, as in the command:
...giving us the previous XML result in JSON format. You can also test the query in the browser by accessing
http://overpass-api.de/api/interpreter?data=[out:json];node(1);out;.
Which way would you go?
Looking forward to hearing from you!
Edited by tarifa