
In-Memory OLTP: A Case Study

Watch this week's video on YouTube

When In-Memory OLTP was first released in SQL Server 2014, I was excited to start using it.  All I could think was "my queries are going to run so FAST!"

Well, I never got around to implementing In-Memory OLTP.  Besides being stuck on an incompatible version of SQL Server at the time, I found the in-memory features had too many limitations for my specific use cases.

Fast forward a few years, and I've done nothing with In-Memory OLTP.  Nothing, that is, until I saw Erin Stellato present at our Northern Ohio SQL Server User Group a few weeks ago - her presentation inspired me to take another look at In-Memory OLTP to see if I could use it.

Use case: Improving ETL staging loads

After being refreshed on the ins and outs of in-memory SQL Server, I wanted to see if I could apply some of the techniques to one of my ETLs.

The ETL consists of two major steps:

  1. Shred documents into row/column data and then dump that data into a staging table.
  2. Delete some of the documents from the staging table.

In the real world, there's a step 1.5 that does some processing of the data, but it's not relevant to these in-memory OLTP demos.

So step one was to create my staging tables.  The memory-optimized table is called "NewStage1" and the traditional disk-based table is called "OldStage1":

DROP DATABASE IF EXISTS InMemoryTest;
GO
CREATE DATABASE InMemoryTest;
GO
USE InMemoryTest;
GO

ALTER DATABASE InMemoryTest ADD FILEGROUP imoltp_mod CONTAINS MEMORY_OPTIMIZED_DATA;
GO 
ALTER DATABASE InMemoryTest ADD FILE (name='imoltp_mod1', filename='C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\DATA\imoltp_mod1') TO FILEGROUP imoltp_mod;
GO  
ALTER DATABASE InMemoryTest SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT=ON;
GO 
ALTER DATABASE InMemoryTest SET RECOVERY SIMPLE
GO

--Numbers Table
-- This needs to be in-memory to be called from a natively compiled procedure
DROP TABLE IF EXISTS InMemoryTest.dbo.Numbers;
GO
CREATE TABLE InMemoryTest.dbo.Numbers
(
    n int
    INDEX ix_n NONCLUSTERED HASH (n) WITH (BUCKET_COUNT=400000)
)
WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_ONLY);  
GO 

INSERT INTO dbo.Numbers (n)
SELECT TOP (4000000) n = CONVERT(INT, ROW_NUMBER() OVER (ORDER BY s1.[object_id]))
FROM sys.all_objects AS s1 CROSS JOIN sys.all_objects AS s2
OPTION (MAXDOP 1);

-- Set up on-disk tables
DROP TABLE IF EXISTS InMemoryTest.dbo.OldStage1;
GO
CREATE TABLE InMemoryTest.dbo.OldStage1
(
    Id int,
    Col1 uniqueidentifier,
    Col2 uniqueidentifier,
    Col3 varchar(1000),
    Col4 varchar(50),
    Col5 varchar(50),
    Col6 varchar(50),
    Col7 int,
    Col8 int,
    Col9 varchar(50),
    Col10 varchar(900),
    Col11 varchar(900),
    Col12 int,
    Col13 int,
    Col14 bit
);
GO
CREATE CLUSTERED INDEX CL_Id ON InMemoryTest.dbo.OldStage1 (Id);
GO


--  Set up in-memory tables and natively compiled procedures
DROP TABLE IF EXISTS InMemoryTest.dbo.NewStage1;
GO
CREATE TABLE InMemoryTest.dbo.NewStage1
(
    Id int,
    Col1 uniqueidentifier,
    Col2 uniqueidentifier,
    Col3 varchar(1000),
    Col4 varchar(50),
    Col5 varchar(50),
    Col6 varchar(50),
    Col7 int,
    Col8 int,
    Col9 varchar(50),
    Col10 varchar(900),
    Col11 varchar(900),
    Col12 int,
    Col13 int,
    Col14 bit
    INDEX ix_id NONCLUSTERED HASH (id) WITH (BUCKET_COUNT=10)
)
WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_ONLY);  
GO

A few things to keep in mind:

  • The tables have the same columns and datatypes, with the only difference being that the NewStage1 table is memory optimized (you can verify this and the durability setting with the query after this list).
  • My database is using simple recovery so I am able to perform minimally logged bulk operations on my disk-based table.
  • Additionally, I'm using the SCHEMA_ONLY durability setting.  This gives me outstanding performance because there is no writing to the transaction log!  However, this means if I lose my in-memory data for any reason (crash, restart, corruption, etc...) I am completely out of luck.  This is fine for my staging data scenario since I can easily recreate the data if necessary.
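A quick sanity check (not part of the ETL itself) is to look at sys.tables, which exposes both the memory-optimized flag and the durability setting for each table:

SELECT
    name,
    is_memory_optimized,
    durability_desc
FROM InMemoryTest.sys.tables
WHERE name IN ('OldStage1', 'NewStage1');
GO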

Inserting and deleting data

Next I'm going to create procedures for inserting data into and deleting data from both my new and old staging tables:

DROP PROCEDURE IF EXISTS dbo.Insert_OldStage1;
GO
CREATE PROCEDURE dbo.Insert_OldStage1
    @Id int,
    @Rows int
AS
BEGIN
    INSERT INTO InMemoryTest.dbo.OldStage1 (Id, Col1, Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10, Col11, Col12, Col13, Col14)
    SELECT Id, Col1, Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10, Col11, Col12, Col13, Col14
    FROM
    (
    SELECT
    @Id as Id,
    '92D14DA3-2C55-4E50-A965-7D3C941417B3' as Col1,
    '92D14DA3-2C55-4E50-A965-7D3C941417B3' as Col2,
    'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' as Col3,
    'aaaaaaaaaaaaaaaaaaaa' as Col4,
    'aaaaaaaaaaaaaaaaaaaa' as Col5,
    'aaaaaaaaaaaaaaaaaaaa' as Col6,
    0 as Col7,
    0 as Col8,
    'aaaaaaaaaaaaaaaaaaaa' as Col9,
    'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' as Col10,
    'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' as Col11,
    1 as Col12,
    1 as Col13,
    1 as Col14
    )a
    CROSS APPLY
    (
    SELECT TOP (@Rows) n FROM dbo.Numbers
    )b
END
GO

DROP PROCEDURE IF EXISTS dbo.Delete_OldStage1;
GO
CREATE PROCEDURE dbo.Delete_OldStage1
    @Id int
AS
BEGIN
    -- Use loop to delete to prevent filling transaction log
    DECLARE 
        @Count int = 0,
        @for_delete int,
        @chunk_size int = 1000000

    SELECT @for_delete = COUNT(Id) FROM InMemoryTest.dbo.OldStage1 
                            WHERE Id = @Id;

    WHILE (@Count < @for_delete)
    BEGIN
        SELECT @Count = @Count + @chunk_size;

        BEGIN TRAN
            DELETE TOP(@chunk_size) FROM InMemoryTest.dbo.OldStage1 WHERE Id = @Id
        COMMIT TRAN
    END
END;
GO



DROP PROCEDURE IF EXISTS dbo.Insert_NewStage1;
GO
CREATE PROCEDURE dbo.Insert_NewStage1
    @Id int,
    @Rows int
WITH NATIVE_COMPILATION, SCHEMABINDING  
AS   
BEGIN ATOMIC   
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')  

    INSERT INTO dbo.NewStage1 (Id, Col1, Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10, Col11, Col12, Col13, Col14)
    SELECT Id, Col1, Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10, Col11, Col12, Col13, Col14 
    FROM
    (
    SELECT
    @Id as Id,
    '92D14DA3-2C55-4E50-A965-7D3C941417B3' as Col1,
    '92D14DA3-2C55-4E50-A965-7D3C941417B3' as Col2,
    'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' as Col3,
    'aaaaaaaaaaaaaaaaaaaa' as Col4,
    'aaaaaaaaaaaaaaaaaaaa' as Col5,
    'aaaaaaaaaaaaaaaaaaaa' as Col6,
    0 as Col7,
    0 as Col8,
    'aaaaaaaaaaaaaaaaaaaa' as Col9,
    'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' as Col10,
    'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' as Col11,
    1 as Col12,
    1 as Col13,
    1 as Col14
    )a
    CROSS APPLY
    (
    SELECT TOP (@Rows) n FROM dbo.Numbers
    )b


END;   
GO  

DROP PROCEDURE IF EXISTS dbo.Delete_NewStage1;
GO
CREATE PROCEDURE dbo.Delete_NewStage1
    @Id int 
WITH NATIVE_COMPILATION, SCHEMABINDING  
AS   
BEGIN ATOMIC   
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')  

    DELETE FROM dbo.NewStage1 WHERE Id = @Id;

END   
GO  

A few more things to note:

  • My new procedures are natively compiled: SQL Server compiles them up front so at run time it can just execute without any extra steps.  The procedures that target my old disk-based tables have to go through the regular interpreted execution path every time they run.
  • In the old delete procedure, I am deleting data in chunks so my transaction log doesn't get full.  In the new version of the procedure, I don't have to worry about this because, as I mentioned earlier, my memory optimized table doesn't have to use the transaction log (a quick way to see that difference for yourself is shown after this list).
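One simple (if unscientific) way to see the transaction log difference is to check how much log space is used before and after running each insert procedure and compare the numbers:

USE InMemoryTest;
GO
-- Run this before and after each insert procedure and compare the results
SELECT
    used_log_space_in_bytes / 1048576.0 AS used_log_space_mb,
    used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
GO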

Let's simulate a load

It's time to see if all of this fancy in-memory stuff is actually worth all of the restrictions.

In my load, I'm going to mimic loading three documents with around 3 million rows each.  Then, I'm going to delete the second document from each table:

-- Old on-disk method
-- Insert data for processing
EXEC InMemoryTest.dbo.Insert_OldStage1 @Id=1, @Rows=2500000;
GO
EXEC InMemoryTest.dbo.Insert_OldStage1 @Id=2, @Rows=3400000;
GO 
EXEC InMemoryTest.dbo.Insert_OldStage1 @Id=3, @Rows=2800000;
GO 

-- Delete set of records after processed
EXEC InMemoryTest.dbo.Delete_OldStage1 @Id = 2
GO

-- New in-memory method
-- Insert data for processing
EXEC InMemoryTest.dbo.Insert_NewStage1 @Id=1, @Rows=2500000;
GO
EXEC InMemoryTest.dbo.Insert_NewStage1 @Id=2, @Rows=3400000;
GO 
EXEC InMemoryTest.dbo.Insert_NewStage1 @Id=3, @Rows=2800000;
GO 

-- Delete set of records after processed
EXEC InMemoryTest.dbo.Delete_NewStage1 @Id = 2
GO

The in-memory version should have a significant advantage because:

  1. The natively compiled procedure is precompiled (shouldn't be a huge deal here since we are doing everything in a single INSERT INTO...SELECT).
  2. The in-memory table inserts/deletes don't have to write to the transaction log (this should be huge!)

Results

  -------------------- ---------------- ---------------
                        **Disk-based**   **In-Memory**
  INSERT 3 documents    65 sec           6 sec
  DELETE 1 document     46 sec           0 sec
  Total time            111 sec          6 sec
  -------------------- ---------------- ---------------

Overall, the in-memory approach finished in about 95% less time than the disk-based approach (roughly 18x, or 1750%, faster).

The results speak for themselves.  In this particular example, in-memory blows the disk-based solution out of the water.

Obviously there are downsides to in-memory (like consuming a lot of memory) but if you are going for pure speed, there's nothing faster.
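If memory consumption is a concern, it's easy to keep an eye on how much memory your memory-optimized tables are using.  Something like this monitoring query (again, not part of the ETL itself) does the trick:

USE InMemoryTest;
GO
SELECT
    OBJECT_NAME(object_id) AS table_name,
    memory_used_by_table_kb,
    memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats;
GO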

Warning! I am not you.

And you are not me.

While in-memory works great for my ETL scenario, there are many requirements and limitations.  It's not going to work in every scenario.  Be sure you understand the in-memory durability options to prevent any potential data loss and try it out for yourself!  You might be surprised by the performance gains you'll see.

AT TIME ZONE: The easy way to deal with time zones and daylight savings time

Watch this week's video on YouTube

Many people think daylight savings time was created to help align the hours that the sun is up with our waking hours so farmers and school children didn't have to be outside in the dark.  Or something like that.

Well let me fill you in on a little secret - daylight savings time was actually created by a government works project to ensure that programmers could forever write tedious conditional logic in their date-based queries to handle the date time conversions surrounding time zones and daylight savings time.

While these types of datetime conversions have historically been a pain, SQL Server 2016 introduced AT TIME ZONE, a feature that makes these types of conversions a breeze.

Defining Time Zone

Let's say you have a datetime value that you know is encoded in UTC (if you don't know what time zone your data was originally encoded in, you're out of luck):

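For example, something like this (the variable name and value here are just for illustration):

DECLARE @MyDateTimeUTC datetime2 = '2018-03-26 11:54:21';

SELECT @MyDateTimeUTC AS MyDateTimeUTC;
-- 2018-03-26 11:54:21.0000000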

Besides the naming convention, there's nothing that tells us that our datetime is in UTC.  But if we know that to be the case, we can use AT TIME ZONE to add a UTC offset to the timestamp to make it official:

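Roughly like this (again, the variable name is just for illustration):

DECLARE @MyDateTimeUTC datetime2 = '2018-03-26 11:54:21';

SELECT @MyDateTimeUTC AT TIME ZONE 'UTC' AS MyDateTimeUTC;
-- 2018-03-26 11:54:21.0000000 +00:00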

See that +00:00 at the end of our value?  That's our time zone offset - it's telling us how many hours and minutes our datetime is offset from UTC.  With this offset in place, our UTC datetime isn't encoded only by the variable name - it's actually encoded in the data itself.

Converting Time Zones

Now that we have a well-defined UTC datetime, we can use AT TIME ZONE to easily convert our data to different time zones.

For example, if we want to convert our UTC datetime to Eastern Standard Time, we can tack on AT TIME ZONE 'Eastern Standard Time':

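Chained together with the UTC step from before, it looks something like this:

DECLARE @MyDateTimeUTC datetime2 = '2018-03-26 11:54:21';

SELECT @MyDateTimeUTC AT TIME ZONE 'UTC' AT TIME ZONE 'Eastern Standard Time' AS MyDateTimeEastern;
-- 2018-03-26 07:54:21.0000000 -04:00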

The time portion of our datetime is now showing 7am with the offset indicating we are 4 hours behind UTC.  Easy time zone conversions, yes!

But What About Our Farmer Friends?

So AT TIME ZONE makes it easy to convert between time zones - but how does it handle daylight savings time conversions?

Well, on March 11, 2018 I lost an hour of my life when daylight savings time kicked in at 2am.  Does AT TIME ZONE recognize this theft?

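Here's a quick check using two dates that straddle the March 11 switch (the specific times are arbitrary):

SELECT
    CAST('2018-03-10 12:00:00' AS datetime2) AT TIME ZONE 'UTC' AT TIME ZONE 'Eastern Standard Time' AS BeforeDST,
    CAST('2018-03-12 12:00:00' AS datetime2) AT TIME ZONE 'UTC' AT TIME ZONE 'Eastern Standard Time' AS AfterDST;
-- BeforeDST: 2018-03-10 07:00:00.0000000 -05:00
-- AfterDST:  2018-03-12 08:00:00.0000000 -04:00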

YES! Say so long to conditional logic that subtracts an amount of time based on the date; AT TIME ZONE handles the switch to DST with ease, as noted by the offset change from -05:00 to -04:00.

And just like that, programming for time zone and daylight savings time logic just became a little easier.

But I Don't Live In The Eastern Standard Time Zone!

No problem, in addition to AT TIME ZONE, SQL Server 2016 added this nifty view that will give you the names of all of the time zones it supports:

SELECT * FROM sys.time_zone_info;

(results: the supported time zone names, their current UTC offsets, and whether each one is currently observing daylight savings time)

Full Automatic Tuning: SQL Server 2026's Most Killer Feature

This post is a response to this month's T-SQL Tuesday #100 prompt by the creator of T-SQL Tuesday himself, Adam Machanic.  T-SQL Tuesday is a way for SQL Server bloggers to share ideas about a different database or professional topic every month.

This month I'm going down the science fiction route and pretending that I'm writing about a new SQL Server feature in 2026.


Fully Automatic Tuning

Watch this week's video on YouTube

I was really excited when automatic tuning capabilities were first introduced in SQL Server 2017.  I couldn't wait to say so long to the days where I had to spend time fixing basic, repetitive query tuning problems.

And while those first versions of automatic plan choice corrections were fine, there was a lot left to be desired...

Fortunately, Microsoft has fully leveraged its built-in R and Python services to allow for advanced automatic tuning to make the life of SQL Server DBAs and developers that much easier.

On By Default

Perhaps the coolest part of these new automatic tuning capabilities is that they are on by default.  What this means is that databases will seem to perform better right out of the box without any kind of intervention.

I think the fact that Microsoft is willing to enable this by default in the on-premises version of SQL Server shows how confident they are in the capabilities of these features.

Optimize For Memory and Data Skew

While the first iterations of automatic query tuning involved swapping out query plans when SQL Server found a regression in CPU performance, the new automatic plan correction is able to factor in many more elements.

For example, instead of optimizing for CPU usage, setting the new flag OPTIMIZE_FOR_MEMORY = ON allows SQL Server to minimize memory usage instead.

Also, with the addition of the new "Optimized" cardinality estimator (so now we have "Legacy", "New", and "Optimized" - who's in charge of naming these things???), SQL Server is able to swap out different estimators at the query level in order to generate better execution plans!

What time is it?

Another new addition to automatic plan corrections is SQL Server's ability to choose an appropriate execution plan based on historical time-of-day server usage.

So if a query is executing during a lull period on the server, SQL Server is intelligent enough to realize this and choose a plan that is more resource intensive.  This means faster query executions at the cost of a more intensive plan - but that's OK since the server isn't being fully utilized during those times anyway.

Making use of hardware sensors

As the world continues to include more data collecting sensors everywhere, SQL Server makes good use of these data points in 2026.

Tying into the server's CPU and motherboard temperature sensors, SQL Server is able to negotiate with the OS and hardware to allow for dynamic CPU overclocking based on server demands.

While this option is not turned on by default, enabling dynamic overclocking allows for SQL Server to give itself a CPU processing boost when necessary, and then dial back down to more stable levels once finished.

This obviously won't be a feature used by everybody, but users who are willing to trade off some stability for additional analytical processing performance will love this feature.

How I Stopped Worrying And Learned To Love Automatic Tuning

At the end of the day, we are our own worst enemies.  Even with the latest and greatest AI technology, we are capable of writing queries so terrible that even the smartest machine learning algorithms can't make sense of them.

While SQL Server's automatic tuning features work wonderfully on the boring and mundane performance problems, there are still plenty of performance problems that it leaves for us to troubleshoot.

And I love that.  Let the software optimize itself and maintain a "good enough" baseline while letting me play with the really fun performance problems.

I'm sure these features will continue to evolve - but so will we, working on new problems and facing new data challenges.

OR vs UNION ALL - Is One Better For Performance?

Today I want to show you a trick that could make your queries run faster.

It won't always work, but when it does everyone will be impressed with your performance tuning prowess.  Let's go!

Watch this week's video on YouTube

Our Skewed Data

Let's create a table and insert some data.

Notice the heavily skewed value distribution.  Also notice how we have a clustered index and a very skimpy nonclustered index:

DROP DATABASE IF EXISTS ORUnionAll
CREATE DATABASE ORUnionAll
GO

CREATE TABLE ORUnionAll.dbo.TestData
(
    Col1 int,
    Col2 char(200),
    Col3 int 
)
GO

INSERT INTO ORUnionAll.dbo.TestData VALUES (1,'',1)
GO 10000
INSERT INTO ORUnionAll.dbo.TestData VALUES (2,'',2)
GO 50
INSERT INTO ORUnionAll.dbo.TestData VALUES (3,'',3)
GO 50

CREATE CLUSTERED INDEX CL_Col1 ON ORUnionAll.dbo.TestData ( Col1 )
GO

CREATE NONCLUSTERED INDEX IX_Col3 ON ORUnionAll.dbo.TestData (Col3)
GO

If we write a query that filters on one of the low-occurrence values in Col3, SQL Server will perform an index seek with a key lookup (since our skimpy nonclustered index doesn't cover all of the columns in our SELECT):

SELECT 
    Col2, Col3
FROM 
    ORUnionAll.dbo.TestData
WHERE
    Col3 = 2

(execution plan: an index seek with a key lookup)

If we then add an OR to our WHERE clause and filter on another low-occurrence value in Col3, SQL Server changes how it wants to retrieve results:

SELECT 
    Col2, Col3
FROM 
    ORUnionAll.dbo.TestData
WHERE
    Col3 = 2 OR Col3 = 3

(execution plan: a clustered index scan)

Suddenly those key lookups become too expensive for SQL Server and the query optimizer thinks it'll be faster to just scan the entire clustered index.

In general this makes sense; SQL Server tries to pick plans that are good enough in most scenarios, and usually I think it chooses wisely.

However, sometimes SQL Server doesn't pick great plans.  Sometimes the plans it picks are downright terrible.

If we encountered a similar scenario in the real world where our tables had more columns, more rows, and larger datatypes, having SQL Server switch from a seek to a scan could kill performance.

So what can we do?

Solutions...maybe

The first thing that comes to mind is to modify or add some indexes.

But maybe our (real-world) table already has too many indexes.  Or maybe we are working with a data source where we can't modify our indexes.

We could also use the FORCESEEK hint, but I don't like using hints as permanent solutions because they feel dirty (and are likely to do unexpected things as your data changes).
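For reference, the hint version would look something like this - I'm only including it for comparison, not recommending it, and it may not always produce the seek plan you're hoping for:

SELECT 
    Col2, Col3
FROM 
    ORUnionAll.dbo.TestData WITH (FORCESEEK)
WHERE
    Col3 = 2 OR Col3 = 3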

One solution to UNION ALL

One solution that a lot of people overlook is rewriting the query so that it uses UNION ALLs instead of ORs.

A lot of the time it's pretty easy to refactor the query to multiple SELECT statements with UNION ALLs while remaining logically the same and returning the same results:

SELECT 
    Col2, Col3
FROM 
    ORUnionAll.dbo.TestData
WHERE 
    Col3 = 2
UNION ALL
SELECT 
    Col2, Col3
FROM 
    ORUnionAll.dbo.TestData
WHERE
    Col3 = 3

Sure, the query is uglier and will be a bigger pain to maintain if you need to make changes in the future, but sometimes we have to suffer for ~~fashion~~ query performance.

But does our UNION ALL query perform better?

(execution plan: the UNION ALL query using index seeks)

Well the plan shows seeks, but as Erik Darling recently pointed out, seeks aren't always a good thing.

So let's compare the reads of the OR query versus the UNION ALL query using SET STATISTICS IO ON:
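Something along these lines works - just re-run the two queries from above back to back with statistics turned on:

SET STATISTICS IO ON;

-- OR version
SELECT Col2, Col3
FROM ORUnionAll.dbo.TestData
WHERE Col3 = 2 OR Col3 = 3;

-- UNION ALL version
SELECT Col2, Col3
FROM ORUnionAll.dbo.TestData
WHERE Col3 = 2
UNION ALL
SELECT Col2, Col3
FROM ORUnionAll.dbo.TestData
WHERE Col3 = 3;

SET STATISTICS IO OFF;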

(results: logical reads for the OR query versus the UNION ALL query)

So in this case, tricking SQL Server into picking a different plan by using UNION ALLs gave us a performance boost.  The difference in reads isn't that large in the above scenario, but I've had this trick take my queries from minutes to seconds in the real world.

So the next time you are experiencing poor performance from a query with OR operators in it, try rewriting it using UNION ALLs.

It's not always going to fix your performance problem but you won't know until you give it a try.

What I Do When I'm NOT Writing SQL

This post is a response to this month's T-SQL Tuesday prompt created by Aaron Bertrand.  Adam Machanic created T-SQL Tuesday as a way for SQL users to share ideas about interesting topics every month.  This month's topic is "Your passions outside of SQL Server".


Getting away

I spend most of my days in front of a computer.

While this is great because I love technology, a few times a year I like separating myself from all forms of semiconductors. I do this by going camping.

Watch this week's video on YouTube

I highly recommend watching this week's video on YouTube - I was able to include way more photos and video clips there!

Camping is a mental reset where I can focus on activities that are completely opposite of what I do every day. It involves different challenges and skill sets, and provides a literal breath of fresh air when compared to working in a typical office environment.

How did I come to enjoy sleeping on the ground and learning to survive with only 20lbs of gear? Let me describe three distinct eras in my camping evolution.

I didn't grow up camping

(photo: high school Bert's first camping trip in Vermont)

My family never went on camping trips. I always enjoyed the outdoors, but I never learned any basic outdoors skills like you might learn in scouts.

My friends during high school came from similar upbringings. We didn't know anything about building campfires or sleeping in tents, but we wanted to learn.

So, during the summer before college we decided we'd drive up to Vermont, rent a campsite at a state park, and learn to live outdoors.

While it was an immensely fun trip, our skills were lacking since we basically ended up eating ramen and sleeping in our cars (because our incorrectly set up tent gushed with rainwater during the middle of the night).

We kept at it though, going back every summer, until we managed to figure out how things should work.  After a few trips we were no longer eating just ramen or sleeping in our cars.

Have tent, will travel

My outdoor skills improved as I continued to camp through my college years; I learned to cook food over coals, stay dry during the harshest rain storms, and light fires with a single match instead of half a can of camping stove fuel.

During this time I also realized that I could travel almost anywhere and have my accommodations cost less than $20 per night PER CAMPSITE.

(photo: I must have been thinking "the water is too clear and warm for this New Englander")

This meant my friends (and future wife Renee) and I spent college spring break camping through the Everglades and the Florida Keys. We spent our days kayaking, zip lining, and eating key lime pies before returning to our ocean front campsite for the evening. I think in all that trip cost us less than $400 per person for 9 days of fun, including gas for the minivan.

(photo: ultra cheap vacation to Yellowstone National Park)

Ultra cheap vacations continued and I now graduated to cooking single pot meals and foil packet dinners. My car was still parked nearby but I wasn't having to sleep in it at night.

I even ventured back to Vermont and proposed to Renee there while camping (she said yes even after three days of no showers. That's when I knew she was a keeper).

Backpacking and gourmet food

(photo: everything we need, carried in and carried out)

Camping eventually evolved into backpacking - instead of having a car a few feet away with a cooking stove and cooler of cold food, now I was learning how to go into the wilderness for days with everything I would need carried in on my back.

Backpacking is truly exciting. You get to go and see places only accessible by a long hike on foot. The trips require more planning: hiking routes, food preparation, water scouting, wildlife management, and so on... all things that invigorate my researching, detail-oriented personality.

Living out of a backpack also means that you can easily fly to locations and still have a cheap vacation. I got to see beautiful National Parks like Yellowstone, the Grand Tetons, and Shenandoah all because I knew how to fit everything I need to survive into a carry-on and personal item (except for stove fuel, bear spray, and a pocket knife... some things you just have to buy on location).

Not only is visiting these places cool to begin with, but camping overnight in them means you get to see the park early before any car driving tourists arrive and you get to stay out way longer after they all leave the park to go grab dinner.

By this point my car is parked miles away from where I am sleeping and my food game has stepped up. Although I am limited to carrying everything I need in a pack, we frequently dine on pad thai, pizza, and cinnamon rolls.

(photo: after a long day spent hiking...)

(photo: ...these are the only foods I need to eat)

Who says you need to be roughing it while camping?