What's up with SQL "commit"?

Does anyone know (Seinfeld voice) what’s the deal with SQL commit?

I had a very odd circumstance the other day: I’d added rows to a table in my schema, but didn’t use “commit;” at the end of my statement. The result was that in one SQL window, when I did SELECT * FROM the table, the new rows would appear; when I did the same SELECT from a different SQL window, the rows would not appear.
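
For illustration, here’s roughly the sequence (the table and column names are made up, and this assumes autocommit is off, which is the default in tools like Oracle’s SQL*Plus):

    -- Session 1: add a row, but don't commit yet
    INSERT INTO my_table (id, name) VALUES (1, 'test');

    -- Session 1: sees the new row (a session sees its own uncommitted changes)
    SELECT * FROM my_table;

    -- Session 2 (a different SQL window): does NOT see the new row yet
    SELECT * FROM my_table;

    -- Session 1: make the change permanent
    COMMIT;

    -- Session 2: the new row now appears
    SELECT * FROM my_table;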

Seems to me if my sql is correct for adding rows, the “commit” shouldn’t be necessary.

Anyone else have experience with this? It was an odd event.

Going only off memory here (too lazy to Google), but I thought when you commit, it locks the table so others can’t insert or change the data in it. Is that right?

As I understand it, you have a “local view” of the database, one where transactions against the “main” (or permanent) database are staged. So when you spin up a local session, you get a “copy” of the main database plus all of the transactions staged to be pushed (i.e., committed) to the permanent database.

The “commit process” allows multiple people to access the same source (via their “local views”), test and check the changes they need to make, and then “commit” those changes to the permanent database. During the commit, the main database is essentially locked so that only one set of changes can take place at a time.
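
A rough sketch of that stage-then-commit flow (generic syntax; the table is made up, and the explicit BEGIN may be START TRANSACTION or implicit, depending on the database):

    -- Start a transaction: changes are staged and visible only to this session
    BEGIN;

    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    UPDATE accounts SET balance = balance + 100 WHERE id = 2;

    -- Test and check the staged changes before anyone else sees them
    SELECT id, balance FROM accounts WHERE id IN (1, 2);

    -- Happy with it? Push the changes to the permanent database...
    COMMIT;
    -- ...or throw the staged changes away instead:
    -- ROLLBACK;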

And “commit” isn’t necessarily an SQL thing so much as a feature of the underlying database’s transaction support. I will add the line “for fetch only” to the end of my SQL code to let the database know that I’m not looking to make changes. This saves a lot of time, since the server doesn’t have to allocate resources to create a staging “area” and arrange things to be committed.
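
For example (“for fetch only” is DB2’s spelling; other databases use FOR READ ONLY or a session-level setting like SET TRANSACTION READ ONLY, and the table here is made up):

    -- Tell the database this query will only read, never change anything
    SELECT order_id, order_date, total
    FROM orders
    WHERE order_date >= '2024-01-01'
    FOR FETCH ONLY;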

I see this same sort of process when I’m working with Git to update code and files.