25/07/01 12:34:33 WARN Utils: Your hostname, lunnen resolves to a loopback address: 127.0.1.1; using 100.64.88.58 instead (on interface wlo1)
25/07/01 12:34:33 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
25/07/01 12:34:34 INFO SparkContext: Running Spark version 3.5.6
25/07/01 12:34:34 INFO SparkContext: OS info Linux, 6.14.0-22-generic, amd64
25/07/01 12:34:34 INFO SparkContext: Java version 21.0.7
25/07/01 12:34:34 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
25/07/01 12:34:34 INFO ResourceUtils: ==============================================================
25/07/01 12:34:34 INFO ResourceUtils: No custom resources configured for spark.driver.
25/07/01 12:34:34 INFO ResourceUtils: ==============================================================
25/07/01 12:34:34 INFO SparkContext: Submitted application: IcebergBranchesDemo
25/07/01 12:34:34 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
25/07/01 12:34:34 INFO ResourceProfile: Limiting resource is cpu
25/07/01 12:34:34 INFO ResourceProfileManager: Added ResourceProfile id: 0
25/07/01 12:34:34 INFO SecurityManager: Changing view acls to: maksim
25/07/01 12:34:34 INFO SecurityManager: Changing modify acls to: maksim
25/07/01 12:34:34 INFO SecurityManager: Changing view acls groups to:
25/07/01 12:34:34 INFO SecurityManager: Changing modify acls groups to:
25/07/01 12:34:34 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: maksim; groups with view permissions: EMPTY; users with modify permissions: maksim; groups with modify permissions: EMPTY
25/07/01 12:34:34 INFO Utils: Successfully started service 'sparkDriver' on port 38163.
25/07/01 12:34:34 INFO SparkEnv: Registering MapOutputTracker
25/07/01 12:34:34 INFO SparkEnv: Registering BlockManagerMaster
25/07/01 12:34:34 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
25/07/01 12:34:34 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
25/07/01 12:34:34 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
25/07/01 12:34:34 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-d60e93b5-459f-4cc2-ad8f-dd9efc65a18e
25/07/01 12:34:34 INFO MemoryStore: MemoryStore started with capacity 434.4 MiB
25/07/01 12:34:34 INFO SparkEnv: Registering OutputCommitCoordinator
25/07/01 12:34:34 INFO JettyUtils: Start Jetty 0.0.0.0:4040 for SparkUI
25/07/01 12:34:34 INFO Utils: Successfully started service 'SparkUI' on port 4040.
25/07/01 12:34:34 INFO SparkContext: Added JAR file:/home/maksim/SynologyDrive/dev/IcebergBranch/build/libs/IcebergBranch-1.0-SNAPSHOT-all.jar at spark://100.64.88.58:38163/jars/IcebergBranch-1.0-SNAPSHOT-all.jar with timestamp 1751362474314
25/07/01 12:34:34 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://100.64.88.58:7077...
25/07/01 12:34:34 INFO TransportClientFactory: Successfully created connection to /100.64.88.58:7077 after 15 ms (0 ms spent in bootstraps)
25/07/01 12:34:34 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20250701123434-0005
25/07/01 12:34:34 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20250701123434-0005/0 on worker-20250701113208-100.64.88.58-43133 (100.64.88.58:43133) with 16 core(s)
25/07/01 12:34:34 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 40463.
25/07/01 12:34:34 INFO NettyBlockTransferService: Server created on 100.64.88.58:40463
25/07/01 12:34:34 INFO StandaloneSchedulerBackend: Granted executor ID app-20250701123434-0005/0 on hostPort 100.64.88.58:43133 with 16 core(s), 1024.0 MiB RAM
25/07/01 12:34:34 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
25/07/01 12:34:35 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 100.64.88.58, 40463, None)
25/07/01 12:34:35 INFO BlockManagerMasterEndpoint: Registering block manager 100.64.88.58:40463 with 434.4 MiB RAM, BlockManagerId(driver, 100.64.88.58, 40463, None)
25/07/01 12:34:35 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 100.64.88.58, 40463, None)
25/07/01 12:34:35 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 100.64.88.58, 40463, None)
25/07/01 12:34:35 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20250701123434-0005/0 is now RUNNING
25/07/01 12:34:35 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
25/07/01 12:34:35 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir.
25/07/01 12:34:35 INFO SharedState: Warehouse path is 'file:/home/maksim/hdp/spark-3.5.6/spark-warehouse'.
25/07/01 12:34:36 INFO StandaloneSchedulerBackend$StandaloneDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (100.64.88.58:53648) with ID 0, ResourceProfileId 0
25/07/01 12:34:36 INFO BlockManagerMasterEndpoint: Registering block manager 100.64.88.58:35945 with 434.4 MiB RAM, BlockManagerId(0, 100.64.88.58, 35945, None)
25/07/01 12:34:39 INFO HiveConf: Found configuration file null
25/07/01 12:34:39 INFO metastore: Trying to connect to metastore with URI thrift://100.64.88.101:9083
25/07/01 12:34:39 INFO metastore: Opened a connection to metastore, current connections: 1
25/07/01 12:34:39 INFO metastore: Connected to metastore.
25/07/01 12:34:40 INFO BaseMetastoreCatalog: Table properties set at catalog level through catalog properties: {}
25/07/01 12:34:40 INFO BaseMetastoreCatalog: Table properties enforced at catalog level through catalog properties: {}
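The session and catalog wiring implied by the log (standalone master at spark://100.64.88.58:7077, Hive metastore at thrift://100.64.88.101:9083, Iceberg managing `spark_catalog`) can be sketched as below. This is a reconstruction read off the log lines, not the demo's actual code; the config keys are standard Iceberg-on-Spark settings, but the exact values are assumptions.

```java
import org.apache.spark.sql.SparkSession;

class SessionSetupSketch {
    static SparkSession build() {
        // Addresses and catalog type are taken from the log above;
        // treat this as an assumed setup, not IcebergBranchesDemo.java itself.
        return SparkSession.builder()
                .appName("IcebergBranchesDemo")
                .master("spark://100.64.88.58:7077")
                // Enables Iceberg SQL extensions (needed later for CREATE BRANCH).
                .config("spark.sql.extensions",
                        "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
                // Let Iceberg back the built-in session catalog via the Hive metastore.
                .config("spark.sql.catalog.spark_catalog",
                        "org.apache.iceberg.spark.SparkSessionCatalog")
                .config("spark.sql.catalog.spark_catalog.type", "hive")
                .config("hive.metastore.uris", "thrift://100.64.88.101:9083")
                .getOrCreate();
    }
}
```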
25/07/01 12:34:40 INFO CodeGenerator: Code generated in 97.248911 ms
25/07/01 12:34:40 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 32.0 KiB, free 434.4 MiB)
25/07/01 12:34:40 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 29.0 KiB, free 434.3 MiB)
25/07/01 12:34:40 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 100.64.88.58:40463 (size: 29.0 KiB, free: 434.4 MiB)
25/07/01 12:34:40 INFO SparkContext: Created broadcast 0 from broadcast at SparkWrite.java:193
25/07/01 12:34:40 INFO OverwriteByExpressionExec: Start processing data source write support: IcebergBatchWrite(table=default.btc, format=PARQUET). The input RDD has 1 partitions.
25/07/01 12:34:40 INFO SparkContext: Starting job: saveAsTable at IcebergBranchesDemo.java:35
25/07/01 12:34:40 INFO DAGScheduler: Got job 0 (saveAsTable at IcebergBranchesDemo.java:35) with 1 output partitions
25/07/01 12:34:40 INFO DAGScheduler: Final stage: ResultStage 0 (saveAsTable at IcebergBranchesDemo.java:35)
25/07/01 12:34:40 INFO DAGScheduler: Parents of final stage: List()
25/07/01 12:34:40 INFO DAGScheduler: Missing parents: List()
25/07/01 12:34:40 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at saveAsTable at IcebergBranchesDemo.java:35), which has no missing parents
25/07/01 12:34:40 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 13.1 KiB, free 434.3 MiB)
25/07/01 12:34:40 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 7.0 KiB, free 434.3 MiB)
25/07/01 12:34:40 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 100.64.88.58:40463 (size: 7.0 KiB, free: 434.4 MiB)
25/07/01 12:34:40 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1611
25/07/01 12:34:40 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at saveAsTable at IcebergBranchesDemo.java:35) (first 15 tasks are for partitions Vector(0))
25/07/01 12:34:40 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks resource profile 0
25/07/01 12:34:40 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0) (100.64.88.58, executor 0, partition 0, PROCESS_LOCAL, 9042 bytes)
25/07/01 12:34:40 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 100.64.88.58:35945 (size: 7.0 KiB, free: 434.4 MiB)
25/07/01 12:34:50 WARN GarbageCollectionMetrics: To enable non-built-in garbage collector(s) List(G1 Concurrent GC), users should configure it(them) to spark.eventLog.gcMetrics.youngGenerationGarbageCollectors or spark.eventLog.gcMetrics.oldGenerationGarbageCollectors
25/07/01 12:36:01 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 100.64.88.58:35945 (size: 29.0 KiB, free: 434.4 MiB)
25/07/01 12:36:03 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 82885 ms on 100.64.88.58 (executor 0) (1/1)
25/07/01 12:36:03 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
25/07/01 12:36:03 INFO DAGScheduler: ResultStage 0 (saveAsTable at IcebergBranchesDemo.java:35) finished in 82.941 s
25/07/01 12:36:03 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
25/07/01 12:36:03 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage finished
25/07/01 12:36:03 INFO DAGScheduler: Job 0 finished: saveAsTable at IcebergBranchesDemo.java:35, took 82.965024 s
25/07/01 12:36:03 INFO OverwriteByExpressionExec: Data source write support IcebergBatchWrite(table=default.btc, format=PARQUET) is committing.
25/07/01 12:36:03 INFO SparkWrite: Committing overwrite by filter true with 1 new data files to table default.btc
25/07/01 12:36:04 INFO SnapshotProducer: Committed snapshot 8457873691194474515 (BaseOverwriteFiles)
25/07/01 12:36:04 INFO LoggingMetricsReporter: Received metrics report: CommitReport{tableName=default.btc, snapshotId=8457873691194474515, sequenceNumber=1, operation=overwrite, commitMetrics=CommitMetricsResult{totalDuration=TimerResult{timeUnit=NANOSECONDS, totalDuration=PT0.540575425S, count=1}, attempts=CounterResult{unit=COUNT, value=1}, addedDataFiles=CounterResult{unit=COUNT, value=1}, removedDataFiles=null, totalDataFiles=CounterResult{unit=COUNT, value=1}, addedDeleteFiles=null, addedEqualityDeleteFiles=null, addedPositionalDeleteFiles=null, addedDVs=null, removedDeleteFiles=null, removedEqualityDeleteFiles=null, removedPositionalDeleteFiles=null, removedDVs=null, totalDeleteFiles=CounterResult{unit=COUNT, value=0}, addedRecords=CounterResult{unit=COUNT, value=409042}, removedRecords=null, totalRecords=CounterResult{unit=COUNT, value=409042}, addedFilesSizeInBytes=CounterResult{unit=BYTES, value=3162638}, removedFilesSizeInBytes=null, totalFilesSizeInBytes=CounterResult{unit=BYTES, value=3162638}, addedPositionalDeletes=null, removedPositionalDeletes=null, totalPositionalDeletes=CounterResult{unit=COUNT, value=0}, addedEqualityDeletes=null, removedEqualityDeletes=null, totalEqualityDeletes=CounterResult{unit=COUNT, value=0}, manifestsCreated=null, manifestsReplaced=null, manifestsKept=null, manifestEntriesProcessed=null}, metadata={engine-version=3.5.6, app-id=app-20250701123434-0005, engine-name=spark, iceberg-version=Apache Iceberg 1.9.1 (commit f40208ae6fb2f33e578c2637d3dea1db18739f31)}}
25/07/01 12:36:04 INFO SparkWrite: Committed in 559 ms
25/07/01 12:36:04 INFO OverwriteByExpressionExec: Data source write support IcebergBatchWrite(table=default.btc, format=PARQUET) committed.
25/07/01 12:36:04 INFO HiveTableOperations: Committed to table spark_catalog.default.btc with the new metadata location hdfs://100.64.88.101:9000/warehouse/btc/metadata/00000-aef32ecd-6070-4a4d-81d5-367a8b83f876.metadata.json
25/07/01 12:36:04 INFO BaseMetastoreTableOperations: Successfully committed to table spark_catalog.default.btc in 513 ms
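The first job (saveAsTable at IcebergBranchesDemo.java:35) is an overwrite-by-filter write that seeds `default.btc` with 409,042 rows in a single Parquet data file. A minimal sketch of such a write follows; the source path, file format, and schema inference options are assumptions, since the log only shows the resulting table and its (date, value) columns.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

class InitialWriteSketch {
    static void writeInitial(SparkSession spark) {
        // Hypothetical source; the real input dataset is not shown in the log.
        Dataset<Row> df = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("btc.csv");

        // An overwrite-mode saveAsTable produces the OverwriteByExpressionExec /
        // "overwrite by filter true" commit recorded above.
        df.write().format("iceberg").mode(SaveMode.Overwrite).saveAsTable("default.btc");
    }
}
```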
Initial data preview:
25/07/01 12:36:04 INFO BaseMetastoreTableOperations: Refreshing table metadata from new version: hdfs://100.64.88.101:9000/warehouse/btc/metadata/00000-aef32ecd-6070-4a4d-81d5-367a8b83f876.metadata.json
25/07/01 12:36:04 INFO BaseMetastoreCatalog: Table loaded by catalog: spark_catalog.default.btc
25/07/01 12:36:05 INFO V2ScanRelationPushDown:
Output: date#8, value#9
25/07/01 12:36:05 INFO SnapshotScan: Scanning table spark_catalog.default.btc snapshot 8457873691194474515 created at 2025-07-01T09:36:04.197+00:00 with filter true
25/07/01 12:36:05 INFO BaseDistributedDataScan: Planning file tasks locally for table spark_catalog.default.btc
25/07/01 12:36:05 INFO SparkPartitioningAwareScan: Reporting UnknownPartitioning with 1 partition(s) for table spark_catalog.default.btc
25/07/01 12:36:05 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 32.0 KiB, free 434.3 MiB)
25/07/01 12:36:05 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 29.4 KiB, free 434.3 MiB)
25/07/01 12:36:05 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 100.64.88.58:40463 (size: 29.4 KiB, free: 434.3 MiB)
25/07/01 12:36:05 INFO SparkContext: Created broadcast 2 from broadcast at SparkBatch.java:85
25/07/01 12:36:05 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 32.0 KiB, free 434.2 MiB)
25/07/01 12:36:05 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 29.4 KiB, free 434.2 MiB)
25/07/01 12:36:05 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on 100.64.88.58:40463 (size: 29.4 KiB, free: 434.3 MiB)
25/07/01 12:36:05 INFO SparkContext: Created broadcast 3 from broadcast at SparkBatch.java:85
25/07/01 12:36:05 INFO CodeGenerator: Code generated in 15.072667 ms
25/07/01 12:36:05 INFO SparkContext: Starting job: show at IcebergBranchesDemo.java:39
25/07/01 12:36:05 INFO DAGScheduler: Got job 1 (show at IcebergBranchesDemo.java:39) with 1 output partitions
25/07/01 12:36:05 INFO DAGScheduler: Final stage: ResultStage 1 (show at IcebergBranchesDemo.java:39)
25/07/01 12:36:05 INFO DAGScheduler: Parents of final stage: List()
25/07/01 12:36:05 INFO DAGScheduler: Missing parents: List()
25/07/01 12:36:05 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[5] at show at IcebergBranchesDemo.java:39), which has no missing parents
25/07/01 12:36:05 INFO MemoryStore: Block broadcast_4 stored as values in memory (estimated size 15.6 KiB, free 434.2 MiB)
25/07/01 12:36:05 INFO MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 6.4 KiB, free 434.2 MiB)
25/07/01 12:36:05 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on 100.64.88.58:40463 (size: 6.4 KiB, free: 434.3 MiB)
25/07/01 12:36:05 INFO SparkContext: Created broadcast 4 from broadcast at DAGScheduler.scala:1611
25/07/01 12:36:05 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[5] at show at IcebergBranchesDemo.java:39) (first 15 tasks are for partitions Vector(0))
25/07/01 12:36:05 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks resource profile 0
25/07/01 12:36:05 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1) (100.64.88.58, executor 0, partition 0, ANY, 13595 bytes)
25/07/01 12:36:05 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on 100.64.88.58:35945 (size: 6.4 KiB, free: 434.4 MiB)
25/07/01 12:36:05 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on 100.64.88.58:35945 (size: 29.4 KiB, free: 434.3 MiB)
25/07/01 12:36:06 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 959 ms on 100.64.88.58 (executor 0) (1/1)
25/07/01 12:36:06 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
25/07/01 12:36:06 INFO DAGScheduler: ResultStage 1 (show at IcebergBranchesDemo.java:39) finished in 0.971 s
25/07/01 12:36:06 INFO DAGScheduler: Job 1 is finished. Cancelling potential speculative or zombie tasks for this job
25/07/01 12:36:06 INFO TaskSchedulerImpl: Killing all running tasks in stage 1: Stage finished
25/07/01 12:36:06 INFO DAGScheduler: Job 1 finished: show at IcebergBranchesDemo.java:39, took 0.974619 s
25/07/01 12:36:06 INFO BlockManagerInfo: Removed broadcast_4_piece0 on 100.64.88.58:40463 in memory (size: 6.4 KiB, free: 434.3 MiB)
25/07/01 12:36:06 INFO BlockManagerInfo: Removed broadcast_4_piece0 on 100.64.88.58:35945 in memory (size: 6.4 KiB, free: 434.3 MiB)
25/07/01 12:36:06 INFO CodeGenerator: Code generated in 8.93971 ms
+--------------------+---------+
| date| value|
+--------------------+---------+
|2024-08-26 10:56:...|5858493.0|
|2024-08-26 11:05:...|5847901.0|
|2024-08-27 00:20:...|5786144.0|
|2024-08-27 14:40:...|5665017.0|
|2024-08-28 05:00:...|5426125.0|
|2024-08-28 19:20:...|5424955.0|
|2024-08-29 09:40:...|5477816.0|
|2024-08-30 00:00:...|5429526.0|
|2024-08-30 14:20:...|5372384.0|
|2024-08-26 11:06:...|5847901.0|
+--------------------+---------+
25/07/01 12:36:07 INFO HiveTableOperations: Committed to table spark_catalog.default.btc with the new metadata location hdfs://100.64.88.101:9000/warehouse/btc/metadata/00001-5023ce26-02bb-4f79-9213-872422b04d33.metadata.json
25/07/01 12:36:07 INFO BaseMetastoreTableOperations: Successfully committed to table spark_catalog.default.btc in 503 ms
Branch 'test' created
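The metadata-only commit at 12:36:07 (new metadata file, no new snapshot) is consistent with branch creation. One way to express it, assuming the Iceberg SQL extensions are enabled, is the standard `CREATE BRANCH` statement; the demo may equally use the Java API (`table.manageSnapshots().createBranch("test").commit()`).

```java
import org.apache.spark.sql.SparkSession;

class CreateBranchSketch {
    static void createTestBranch(SparkSession spark) {
        // Records the table's current snapshot as the head of branch 'test'.
        // No data is copied; this is a metadata-only commit.
        spark.sql("ALTER TABLE spark_catalog.default.btc CREATE BRANCH test");
    }
}
```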
25/07/01 12:36:07 INFO BaseMetastoreTableOperations: Refreshing table metadata from new version: hdfs://100.64.88.101:9000/warehouse/btc/metadata/00001-5023ce26-02bb-4f79-9213-872422b04d33.metadata.json
25/07/01 12:36:07 INFO CodeGenerator: Code generated in 6.300417 ms
25/07/01 12:36:07 INFO MemoryStore: Block broadcast_5 stored as values in memory (estimated size 32.0 KiB, free 434.2 MiB)
25/07/01 12:36:07 INFO MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 29.4 KiB, free 434.1 MiB)
25/07/01 12:36:07 INFO BlockManagerInfo: Added broadcast_5_piece0 in memory on 100.64.88.58:40463 (size: 29.4 KiB, free: 434.3 MiB)
25/07/01 12:36:07 INFO BlockManagerInfo: Removed broadcast_2_piece0 on 100.64.88.58:40463 in memory (size: 29.4 KiB, free: 434.3 MiB)
25/07/01 12:36:07 INFO SparkContext: Created broadcast 5 from broadcast at SparkWrite.java:193
25/07/01 12:36:07 INFO AppendDataExec: Start processing data source write support: IcebergBatchWrite(table=spark_catalog.default.btc, format=PARQUET). The input RDD has 1 partitions.
25/07/01 12:36:07 INFO SparkContext: Starting job: append at IcebergBranchesDemo.java:52
25/07/01 12:36:07 INFO DAGScheduler: Got job 2 (append at IcebergBranchesDemo.java:52) with 1 output partitions
25/07/01 12:36:07 INFO DAGScheduler: Final stage: ResultStage 2 (append at IcebergBranchesDemo.java:52)
25/07/01 12:36:07 INFO DAGScheduler: Parents of final stage: List()
25/07/01 12:36:07 INFO DAGScheduler: Missing parents: List()
25/07/01 12:36:07 INFO BlockManagerInfo: Removed broadcast_3_piece0 on 100.64.88.58:40463 in memory (size: 29.4 KiB, free: 434.3 MiB)
25/07/01 12:36:07 INFO DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[7] at append at IcebergBranchesDemo.java:52), which has no missing parents
25/07/01 12:36:07 INFO BlockManagerInfo: Removed broadcast_3_piece0 on 100.64.88.58:35945 in memory (size: 29.4 KiB, free: 434.4 MiB)
25/07/01 12:36:07 INFO MemoryStore: Block broadcast_6 stored as values in memory (estimated size 8.0 KiB, free 434.3 MiB)
25/07/01 12:36:07 INFO MemoryStore: Block broadcast_6_piece0 stored as bytes in memory (estimated size 4.5 KiB, free 434.2 MiB)
25/07/01 12:36:07 INFO BlockManagerInfo: Added broadcast_6_piece0 in memory on 100.64.88.58:40463 (size: 4.5 KiB, free: 434.3 MiB)
25/07/01 12:36:07 INFO SparkContext: Created broadcast 6 from broadcast at DAGScheduler.scala:1611
25/07/01 12:36:07 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (MapPartitionsRDD[7] at append at IcebergBranchesDemo.java:52) (first 15 tasks are for partitions Vector(0))
25/07/01 12:36:07 INFO TaskSchedulerImpl: Adding task set 2.0 with 1 tasks resource profile 0
25/07/01 12:36:07 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 2) (100.64.88.58, executor 0, partition 0, PROCESS_LOCAL, 9376 bytes)
25/07/01 12:36:07 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 100.64.88.58:40463 in memory (size: 29.0 KiB, free: 434.4 MiB)
25/07/01 12:36:07 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 100.64.88.58:35945 in memory (size: 29.0 KiB, free: 434.4 MiB)
25/07/01 12:36:07 INFO BlockManagerInfo: Added broadcast_6_piece0 in memory on 100.64.88.58:35945 (size: 4.5 KiB, free: 434.4 MiB)
25/07/01 12:36:07 INFO BlockManagerInfo: Removed broadcast_1_piece0 on 100.64.88.58:40463 in memory (size: 7.0 KiB, free: 434.4 MiB)
25/07/01 12:36:07 INFO BlockManagerInfo: Removed broadcast_1_piece0 on 100.64.88.58:35945 in memory (size: 7.0 KiB, free: 434.4 MiB)
25/07/01 12:36:07 INFO BlockManagerInfo: Added broadcast_5_piece0 in memory on 100.64.88.58:35945 (size: 29.4 KiB, free: 434.4 MiB)
25/07/01 12:36:08 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 2) in 440 ms on 100.64.88.58 (executor 0) (1/1)
25/07/01 12:36:08 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
25/07/01 12:36:08 INFO DAGScheduler: ResultStage 2 (append at IcebergBranchesDemo.java:52) finished in 0.449 s
25/07/01 12:36:08 INFO DAGScheduler: Job 2 is finished. Cancelling potential speculative or zombie tasks for this job
25/07/01 12:36:08 INFO TaskSchedulerImpl: Killing all running tasks in stage 2: Stage finished
25/07/01 12:36:08 INFO DAGScheduler: Job 2 finished: append at IcebergBranchesDemo.java:52, took 0.452168 s
25/07/01 12:36:08 INFO AppendDataExec: Data source write support IcebergBatchWrite(table=spark_catalog.default.btc, format=PARQUET) is committing.
25/07/01 12:36:08 INFO SparkWrite: Committing append with 1 new data files to table spark_catalog.default.btc
25/07/01 12:36:08 INFO HiveTableOperations: Committed to table spark_catalog.default.btc with the new metadata location hdfs://100.64.88.101:9000/warehouse/btc/metadata/00002-a74144f6-4ac9-4935-85ee-b8eb1c2df711.metadata.json
25/07/01 12:36:08 INFO BaseMetastoreTableOperations: Successfully committed to table spark_catalog.default.btc in 382 ms
25/07/01 12:36:08 INFO SnapshotProducer: Committed snapshot 4811987959892666483 (MergeAppend)
25/07/01 12:36:08 INFO BaseMetastoreTableOperations: Refreshing table metadata from new version: hdfs://100.64.88.101:9000/warehouse/btc/metadata/00002-a74144f6-4ac9-4935-85ee-b8eb1c2df711.metadata.json
25/07/01 12:36:08 INFO LoggingMetricsReporter: Received metrics report: CommitReport{tableName=spark_catalog.default.btc, snapshotId=4811987959892666483, sequenceNumber=2, operation=append, commitMetrics=CommitMetricsResult{totalDuration=TimerResult{timeUnit=NANOSECONDS, totalDuration=PT0.62190753S, count=1}, attempts=CounterResult{unit=COUNT, value=1}, addedDataFiles=CounterResult{unit=COUNT, value=1}, removedDataFiles=null, totalDataFiles=CounterResult{unit=COUNT, value=2}, addedDeleteFiles=null, addedEqualityDeleteFiles=null, addedPositionalDeleteFiles=null, addedDVs=null, removedDeleteFiles=null, removedEqualityDeleteFiles=null, removedPositionalDeleteFiles=null, removedDVs=null, totalDeleteFiles=CounterResult{unit=COUNT, value=0}, addedRecords=CounterResult{unit=COUNT, value=1}, removedRecords=null, totalRecords=CounterResult{unit=COUNT, value=409043}, addedFilesSizeInBytes=CounterResult{unit=BYTES, value=718}, removedFilesSizeInBytes=null, totalFilesSizeInBytes=CounterResult{unit=BYTES, value=3163356}, addedPositionalDeletes=null, removedPositionalDeletes=null, totalPositionalDeletes=CounterResult{unit=COUNT, value=0}, addedEqualityDeletes=null, removedEqualityDeletes=null, totalEqualityDeletes=CounterResult{unit=COUNT, value=0}, manifestsCreated=null, manifestsReplaced=null, manifestsKept=null, manifestEntriesProcessed=null}, metadata={engine-version=3.5.6, app-id=app-20250701123434-0005, engine-name=spark, iceberg-version=Apache Iceberg 1.9.1 (commit f40208ae6fb2f33e578c2637d3dea1db18739f31)}}
25/07/01 12:36:08 INFO SparkWrite: Committed in 690 ms
25/07/01 12:36:08 INFO AppendDataExec: Data source write support IcebergBatchWrite(table=spark_catalog.default.btc, format=PARQUET) committed.
Data written to branch 'test'
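The append job (IcebergBranchesDemo.java:52) adds exactly one record to the `test` branch (addedRecords=1 in the commit report). A sketch of such a branch-scoped append, using Iceberg's `branch_<name>` table identifier, might look like this; the row's values are made up for illustration.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.catalyst.analysis.NoSuchTableException;

class AppendToBranchSketch {
    static void appendOneRow(SparkSession spark) throws NoSuchTableException {
        // Hypothetical single row matching the table's (date, value) schema.
        Dataset<Row> oneRow = spark.sql(
                "SELECT timestamp('2024-08-30 15:00:00') AS date, 5400000.0D AS value");

        // Writing to table.branch_test commits only to the 'test' branch;
        // the 'main' branch is untouched.
        oneRow.writeTo("spark_catalog.default.btc.branch_test").append();
    }
}
```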
Row count in test branch:
25/07/01 12:36:08 INFO SnapshotScan: Scanning table spark_catalog.default.btc snapshot 4811987959892666483 created at 2025-07-01T09:36:08.300+00:00 with filter true
25/07/01 12:36:08 INFO BaseDistributedDataScan: Planning file tasks locally for table spark_catalog.default.btc
25/07/01 12:36:08 INFO V2ScanRelationPushDown:
Pushing operators to spark_catalog.default.btc
Pushed Aggregate Functions:
COUNT(*)
Pushed Group by:
25/07/01 12:36:08 INFO CodeGenerator: Code generated in 12.532958 ms
25/07/01 12:36:09 INFO CodeGenerator: Code generated in 8.534771 ms
25/07/01 12:36:09 INFO DAGScheduler: Registering RDD 10 (show at IcebergBranchesDemo.java:57) as input to shuffle 0
25/07/01 12:36:09 INFO DAGScheduler: Got map stage job 3 (show at IcebergBranchesDemo.java:57) with 1 output partitions
25/07/01 12:36:09 INFO DAGScheduler: Final stage: ShuffleMapStage 3 (show at IcebergBranchesDemo.java:57)
25/07/01 12:36:09 INFO DAGScheduler: Parents of final stage: List()
25/07/01 12:36:09 INFO DAGScheduler: Missing parents: List()
25/07/01 12:36:09 INFO DAGScheduler: Submitting ShuffleMapStage 3 (MapPartitionsRDD[10] at show at IcebergBranchesDemo.java:57), which has no missing parents
25/07/01 12:36:09 INFO MemoryStore: Block broadcast_7 stored as values in memory (estimated size 12.6 KiB, free 434.3 MiB)
25/07/01 12:36:09 INFO MemoryStore: Block broadcast_7_piece0 stored as bytes in memory (estimated size 6.2 KiB, free 434.3 MiB)
25/07/01 12:36:09 INFO BlockManagerInfo: Added broadcast_7_piece0 in memory on 100.64.88.58:40463 (size: 6.2 KiB, free: 434.4 MiB)
25/07/01 12:36:09 INFO SparkContext: Created broadcast 7 from broadcast at DAGScheduler.scala:1611
25/07/01 12:36:09 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 3 (MapPartitionsRDD[10] at show at IcebergBranchesDemo.java:57) (first 15 tasks are for partitions Vector(0))
25/07/01 12:36:09 INFO TaskSchedulerImpl: Adding task set 3.0 with 1 tasks resource profile 0
25/07/01 12:36:09 INFO TaskSetManager: Starting task 0.0 in stage 3.0 (TID 3) (100.64.88.58, executor 0, partition 0, PROCESS_LOCAL, 9357 bytes)
25/07/01 12:36:09 INFO BlockManagerInfo: Added broadcast_7_piece0 in memory on 100.64.88.58:35945 (size: 6.2 KiB, free: 434.4 MiB)
25/07/01 12:36:09 INFO TaskSetManager: Finished task 0.0 in stage 3.0 (TID 3) in 113 ms on 100.64.88.58 (executor 0) (1/1)
25/07/01 12:36:09 INFO TaskSchedulerImpl: Removed TaskSet 3.0, whose tasks have all completed, from pool
25/07/01 12:36:09 INFO DAGScheduler: ShuffleMapStage 3 (show at IcebergBranchesDemo.java:57) finished in 0.131 s
25/07/01 12:36:09 INFO DAGScheduler: looking for newly runnable stages
25/07/01 12:36:09 INFO DAGScheduler: running: Set()
25/07/01 12:36:09 INFO DAGScheduler: waiting: Set()
25/07/01 12:36:09 INFO DAGScheduler: failed: Set()
25/07/01 12:36:09 INFO CodeGenerator: Code generated in 9.178572 ms
25/07/01 12:36:09 INFO SparkContext: Starting job: show at IcebergBranchesDemo.java:57
25/07/01 12:36:09 INFO DAGScheduler: Got job 4 (show at IcebergBranchesDemo.java:57) with 1 output partitions
25/07/01 12:36:09 INFO DAGScheduler: Final stage: ResultStage 5 (show at IcebergBranchesDemo.java:57)
25/07/01 12:36:09 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 4)
25/07/01 12:36:09 INFO DAGScheduler: Missing parents: List()
25/07/01 12:36:09 INFO DAGScheduler: Submitting ResultStage 5 (MapPartitionsRDD[13] at show at IcebergBranchesDemo.java:57), which has no missing parents
25/07/01 12:36:09 INFO MemoryStore: Block broadcast_8 stored as values in memory (estimated size 13.8 KiB, free 434.3 MiB)
25/07/01 12:36:09 INFO MemoryStore: Block broadcast_8_piece0 stored as bytes in memory (estimated size 6.4 KiB, free 434.3 MiB)
25/07/01 12:36:09 INFO BlockManagerInfo: Added broadcast_8_piece0 in memory on 100.64.88.58:40463 (size: 6.4 KiB, free: 434.4 MiB)
25/07/01 12:36:09 INFO SparkContext: Created broadcast 8 from broadcast at DAGScheduler.scala:1611
25/07/01 12:36:09 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 5 (MapPartitionsRDD[13] at show at IcebergBranchesDemo.java:57) (first 15 tasks are for partitions Vector(0))
25/07/01 12:36:09 INFO TaskSchedulerImpl: Adding task set 5.0 with 1 tasks resource profile 0
25/07/01 12:36:09 INFO TaskSetManager: Starting task 0.0 in stage 5.0 (TID 4) (100.64.88.58, executor 0, partition 0, NODE_LOCAL, 9196 bytes)
25/07/01 12:36:09 INFO BlockManagerInfo: Added broadcast_8_piece0 in memory on 100.64.88.58:35945 (size: 6.4 KiB, free: 434.4 MiB)
25/07/01 12:36:09 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to 100.64.88.58:53648
25/07/01 12:36:09 INFO TaskSetManager: Finished task 0.0 in stage 5.0 (TID 4) in 88 ms on 100.64.88.58 (executor 0) (1/1)
25/07/01 12:36:09 INFO TaskSchedulerImpl: Removed TaskSet 5.0, whose tasks have all completed, from pool
25/07/01 12:36:09 INFO DAGScheduler: ResultStage 5 (show at IcebergBranchesDemo.java:57) finished in 0.093 s
25/07/01 12:36:09 INFO DAGScheduler: Job 4 is finished. Cancelling potential speculative or zombie tasks for this job
25/07/01 12:36:09 INFO TaskSchedulerImpl: Killing all running tasks in stage 5: Stage finished
25/07/01 12:36:09 INFO DAGScheduler: Job 4 finished: show at IcebergBranchesDemo.java:57, took 0.099330 s
25/07/01 12:36:09 INFO CodeGenerator: Code generated in 7.345831 ms
+--------+
|count(1)|
+--------+
| 409043|
+--------+
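The two count queries demonstrate branch isolation: the `test` branch sees 409,043 rows (the original 409,042 plus the appended one), while `main` still points at the pre-append snapshot 8457873691194474515 with 409,042 rows. Per-branch reads can be expressed with `VERSION AS OF`, which accepts branch names in Iceberg:

```java
import org.apache.spark.sql.SparkSession;

class BranchCountsSketch {
    static void showCounts(SparkSession spark) {
        // 'test' branch: 409,043 rows (includes the branch-only append).
        spark.sql("SELECT count(*) FROM spark_catalog.default.btc VERSION AS OF 'test'").show();
        // 'main' branch: still 409,042 rows; the branch write is isolated.
        spark.sql("SELECT count(*) FROM spark_catalog.default.btc VERSION AS OF 'main'").show();
    }
}
```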
Row count in main branch:
25/07/01 12:36:09 INFO SnapshotScan: Scanning table spark_catalog.default.btc snapshot 8457873691194474515 created at 2025-07-01T09:36:04.197+00:00 with filter true
25/07/01 12:36:09 INFO BaseDistributedDataScan: Planning file tasks locally for table spark_catalog.default.btc
25/07/01 12:36:09 INFO V2ScanRelationPushDown:
Pushing operators to spark_catalog.default.btc
Pushed Aggregate Functions:
COUNT(*)
Pushed Group by:
25/07/01 12:36:09 INFO DAGScheduler: Registering RDD 16 (show at IcebergBranchesDemo.java:60) as input to shuffle 1
25/07/01 12:36:09 INFO DAGScheduler: Got map stage job 5 (show at IcebergBranchesDemo.java:60) with 1 output partitions
25/07/01 12:36:09 INFO DAGScheduler: Final stage: ShuffleMapStage 6 (show at IcebergBranchesDemo.java:60)
25/07/01 12:36:09 INFO DAGScheduler: Parents of final stage: List()
25/07/01 12:36:09 INFO DAGScheduler: Missing parents: List()
25/07/01 12:36:09 INFO DAGScheduler: Submitting ShuffleMapStage 6 (MapPartitionsRDD[16] at show at IcebergBranchesDemo.java:60), which has no missing parents
25/07/01 12:36:09 INFO MemoryStore: Block broadcast_9 stored as values in memory (estimated size 12.6 KiB, free 434.3 MiB)
25/07/01 12:36:09 INFO MemoryStore: Block broadcast_9_piece0 stored as bytes in memory (estimated size 6.2 KiB, free 434.3 MiB)
25/07/01 12:36:09 INFO BlockManagerInfo: Added broadcast_9_piece0 in memory on 100.64.88.58:40463 (size: 6.2 KiB, free: 434.3 MiB)
25/07/01 12:36:09 INFO SparkContext: Created broadcast 9 from broadcast at DAGScheduler.scala:1611
25/07/01 12:36:09 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 6 (MapPartitionsRDD[16] at show at IcebergBranchesDemo.java:60) (first 15 tasks are for partitions Vector(0))
25/07/01 12:36:09 INFO TaskSchedulerImpl: Adding task set 6.0 with 1 tasks resource profile 0
25/07/01 12:36:09 INFO TaskSetManager: Starting task 0.0 in stage 6.0 (TID 5) (100.64.88.58, executor 0, partition 0, PROCESS_LOCAL, 9357 bytes)
25/07/01 12:36:09 INFO BlockManagerInfo: Added broadcast_9_piece0 in memory on 100.64.88.58:35945 (size: 6.2 KiB, free: 434.3 MiB)
25/07/01 12:36:09 INFO TaskSetManager: Finished task 0.0 in stage 6.0 (TID 5) in 21 ms on 100.64.88.58 (executor 0) (1/1)
25/07/01 12:36:09 INFO TaskSchedulerImpl: Removed TaskSet 6.0, whose tasks have all completed, from pool
25/07/01 12:36:09 INFO DAGScheduler: ShuffleMapStage 6 (show at IcebergBranchesDemo.java:60) finished in 0.026 s
25/07/01 12:36:09 INFO DAGScheduler: looking for newly runnable stages
25/07/01 12:36:09 INFO DAGScheduler: running: Set()
25/07/01 12:36:09 INFO DAGScheduler: waiting: Set()
25/07/01 12:36:09 INFO DAGScheduler: failed: Set()
25/07/01 12:36:09 INFO SparkContext: Starting job: show at IcebergBranchesDemo.java:60
25/07/01 12:36:09 INFO DAGScheduler: Got job 6 (show at IcebergBranchesDemo.java:60) with 1 output partitions
25/07/01 12:36:09 INFO DAGScheduler: Final stage: ResultStage 8 (show at IcebergBranchesDemo.java:60)
25/07/01 12:36:09 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 7)
25/07/01 12:36:09 INFO DAGScheduler: Missing parents: List()
25/07/01 12:36:09 INFO DAGScheduler: Submitting ResultStage 8 (MapPartitionsRDD[19] at show at IcebergBranchesDemo.java:60), which has no missing parents
25/07/01 12:36:09 INFO MemoryStore: Block broadcast_10 stored as values in memory (estimated size 13.8 KiB, free 434.3 MiB)
25/07/01 12:36:09 INFO MemoryStore: Block broadcast_10_piece0 stored as bytes in memory (estimated size 6.4 KiB, free 434.3 MiB)
25/07/01 12:36:09 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on 100.64.88.58:40463 (size: 6.4 KiB, free: 434.3 MiB)
25/07/01 12:36:09 INFO SparkContext: Created broadcast 10 from broadcast at DAGScheduler.scala:1611
25/07/01 12:36:09 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 8 (MapPartitionsRDD[19] at show at IcebergBranchesDemo.java:60) (first 15 tasks are for partitions Vector(0))
25/07/01 12:36:09 INFO TaskSchedulerImpl: Adding task set 8.0 with 1 tasks resource profile 0
25/07/01 12:36:09 INFO TaskSetManager: Starting task 0.0 in stage 8.0 (TID 6) (100.64.88.58, executor 0, partition 0, NODE_LOCAL, 9196 bytes)
25/07/01 12:36:09 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on 100.64.88.58:35945 (size: 6.4 KiB, free: 434.3 MiB)
25/07/01 12:36:09 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 1 to 100.64.88.58:53648
25/07/01 12:36:09 INFO TaskSetManager: Finished task 0.0 in stage 8.0 (TID 6) in 29 ms on 100.64.88.58 (executor 0) (1/1)
25/07/01 12:36:09 INFO TaskSchedulerImpl: Removed TaskSet 8.0, whose tasks have all completed, from pool
25/07/01 12:36:09 INFO DAGScheduler: ResultStage 8 (show at IcebergBranchesDemo.java:60) finished in 0.034 s
25/07/01 12:36:09 INFO DAGScheduler: Job 6 is finished. Cancelling potential speculative or zombie tasks for this job
25/07/01 12:36:09 INFO TaskSchedulerImpl: Killing all running tasks in stage 8: Stage finished
25/07/01 12:36:09 INFO DAGScheduler: Job 6 finished: show at IcebergBranchesDemo.java:60, took 0.036402 s
+--------+
|count(1)|
+--------+
| 409042|
+--------+
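The count above (409042) is taken before the fast-forward commit that follows. To read a specific branch rather than the table's current main state, Spark's time-travel syntax accepts a branch name (a sketch; whether the demo uses this SQL form or `DataFrameReader.option("branch", ...)` cannot be seen from the log):

```sql
-- Read the table as of the head snapshot of the 'test' branch.
SELECT COUNT(*) FROM spark_catalog.default.btc VERSION AS OF 'test';
```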
25/07/01 12:36:10 INFO HiveTableOperations: Committed to table spark_catalog.default.btc with the new metadata location hdfs://100.64.88.101:9000/warehouse/btc/metadata/00003-5ebc7070-f4ae-4113-8534-f89465ecf9c2.metadata.json
25/07/01 12:36:10 INFO BaseMetastoreTableOperations: Successfully committed to table spark_catalog.default.btc in 367 ms
25/07/01 12:36:10 INFO BaseMetastoreTableOperations: Refreshing table metadata from new version: hdfs://100.64.88.101:9000/warehouse/btc/metadata/00003-5ebc7070-f4ae-4113-8534-f89465ecf9c2.metadata.json
Branches merged with fast_forward
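The merge reported here corresponds to Iceberg's `fast_forward` Spark procedure, which advances one branch reference to the head of another when no divergent commits exist. A minimal sketch, assuming the demo fast-forwards `main` to `test` (the branch names are an assumption; only the table name `default.btc` appears in the log):

```sql
-- Move 'main' to the current snapshot of 'test'.
-- Fails if 'main' is not an ancestor of 'test'; no merge commit is created.
CALL spark_catalog.system.fast_forward('default.btc', 'main', 'test');
```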
Row count after merge:
25/07/01 12:36:10 INFO SnapshotScan: Scanning table spark_catalog.default.btc snapshot 4811987959892666483 created at 2025-07-01T09:36:08.300+00:00 with filter true
25/07/01 12:36:10 INFO BaseDistributedDataScan: Planning file tasks locally for table spark_catalog.default.btc
25/07/01 12:36:10 INFO V2ScanRelationPushDown:
Pushing operators to spark_catalog.default.btc
Pushed Aggregate Functions:
COUNT(*)
Pushed Group by:
25/07/01 12:36:10 INFO DAGScheduler: Registering RDD 22 (show at IcebergBranchesDemo.java:68) as input to shuffle 2
25/07/01 12:36:10 INFO DAGScheduler: Got map stage job 7 (show at IcebergBranchesDemo.java:68) with 1 output partitions
25/07/01 12:36:10 INFO DAGScheduler: Final stage: ShuffleMapStage 9 (show at IcebergBranchesDemo.java:68)
25/07/01 12:36:10 INFO DAGScheduler: Parents of final stage: List()
25/07/01 12:36:10 INFO DAGScheduler: Missing parents: List()
25/07/01 12:36:10 INFO DAGScheduler: Submitting ShuffleMapStage 9 (MapPartitionsRDD[22] at show at IcebergBranchesDemo.java:68), which has no missing parents
25/07/01 12:36:10 INFO MemoryStore: Block broadcast_11 stored as values in memory (estimated size 12.6 KiB, free 434.2 MiB)
25/07/01 12:36:10 INFO MemoryStore: Block broadcast_11_piece0 stored as bytes in memory (estimated size 6.2 KiB, free 434.2 MiB)
25/07/01 12:36:10 INFO BlockManagerInfo: Added broadcast_11_piece0 in memory on 100.64.88.58:40463 (size: 6.2 KiB, free: 434.3 MiB)
25/07/01 12:36:10 INFO SparkContext: Created broadcast 11 from broadcast at DAGScheduler.scala:1611
25/07/01 12:36:10 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 9 (MapPartitionsRDD[22] at show at IcebergBranchesDemo.java:68) (first 15 tasks are for partitions Vector(0))
25/07/01 12:36:10 INFO TaskSchedulerImpl: Adding task set 9.0 with 1 tasks resource profile 0
25/07/01 12:36:10 INFO TaskSetManager: Starting task 0.0 in stage 9.0 (TID 7) (100.64.88.58, executor 0, partition 0, PROCESS_LOCAL, 9357 bytes)
25/07/01 12:36:10 INFO BlockManagerInfo: Added broadcast_11_piece0 in memory on 100.64.88.58:35945 (size: 6.2 KiB, free: 434.3 MiB)
25/07/01 12:36:10 INFO TaskSetManager: Finished task 0.0 in stage 9.0 (TID 7) in 16 ms on 100.64.88.58 (executor 0) (1/1)
25/07/01 12:36:10 INFO TaskSchedulerImpl: Removed TaskSet 9.0, whose tasks have all completed, from pool
25/07/01 12:36:10 INFO DAGScheduler: ShuffleMapStage 9 (show at IcebergBranchesDemo.java:68) finished in 0.021 s
25/07/01 12:36:10 INFO DAGScheduler: looking for newly runnable stages
25/07/01 12:36:10 INFO DAGScheduler: running: Set()
25/07/01 12:36:10 INFO DAGScheduler: waiting: Set()
25/07/01 12:36:10 INFO DAGScheduler: failed: Set()
25/07/01 12:36:10 INFO SparkContext: Starting job: show at IcebergBranchesDemo.java:68
25/07/01 12:36:10 INFO DAGScheduler: Got job 8 (show at IcebergBranchesDemo.java:68) with 1 output partitions
25/07/01 12:36:10 INFO DAGScheduler: Final stage: ResultStage 11 (show at IcebergBranchesDemo.java:68)
25/07/01 12:36:10 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 10)
25/07/01 12:36:10 INFO DAGScheduler: Missing parents: List()
25/07/01 12:36:10 INFO DAGScheduler: Submitting ResultStage 11 (MapPartitionsRDD[25] at show at IcebergBranchesDemo.java:68), which has no missing parents
25/07/01 12:36:10 INFO MemoryStore: Block broadcast_12 stored as values in memory (estimated size 13.8 KiB, free 434.2 MiB)
25/07/01 12:36:10 INFO MemoryStore: Block broadcast_12_piece0 stored as bytes in memory (estimated size 6.4 KiB, free 434.2 MiB)
25/07/01 12:36:10 INFO BlockManagerInfo: Added broadcast_12_piece0 in memory on 100.64.88.58:40463 (size: 6.4 KiB, free: 434.3 MiB)
25/07/01 12:36:10 INFO SparkContext: Created broadcast 12 from broadcast at DAGScheduler.scala:1611
25/07/01 12:36:10 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 11 (MapPartitionsRDD[25] at show at IcebergBranchesDemo.java:68) (first 15 tasks are for partitions Vector(0))
25/07/01 12:36:10 INFO TaskSchedulerImpl: Adding task set 11.0 with 1 tasks resource profile 0
25/07/01 12:36:10 INFO TaskSetManager: Starting task 0.0 in stage 11.0 (TID 8) (100.64.88.58, executor 0, partition 0, NODE_LOCAL, 9196 bytes)
25/07/01 12:36:10 INFO BlockManagerInfo: Added broadcast_12_piece0 in memory on 100.64.88.58:35945 (size: 6.4 KiB, free: 434.3 MiB)
25/07/01 12:36:10 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 2 to 100.64.88.58:53648
25/07/01 12:36:10 INFO TaskSetManager: Finished task 0.0 in stage 11.0 (TID 8) in 26 ms on 100.64.88.58 (executor 0) (1/1)
25/07/01 12:36:10 INFO TaskSchedulerImpl: Removed TaskSet 11.0, whose tasks have all completed, from pool
25/07/01 12:36:10 INFO DAGScheduler: ResultStage 11 (show at IcebergBranchesDemo.java:68) finished in 0.030 s
25/07/01 12:36:10 INFO DAGScheduler: Job 8 is finished. Cancelling potential speculative or zombie tasks for this job
25/07/01 12:36:10 INFO TaskSchedulerImpl: Killing all running tasks in stage 11: Stage finished
25/07/01 12:36:10 INFO DAGScheduler: Job 8 finished: show at IcebergBranchesDemo.java:68, took 0.032356 s
+--------+
|count(1)|
+--------+
| 409043|
+--------+
25/07/01 12:36:10 INFO HiveTableOperations: Committed to table spark_catalog.default.btc with the new metadata location hdfs://100.64.88.101:9000/warehouse/btc/metadata/00004-d492a7d7-849f-4790-9040-a428c9afb3f5.metadata.json
25/07/01 12:36:10 INFO BaseMetastoreTableOperations: Successfully committed to table spark_catalog.default.btc in 328 ms
Branch 'test' dropped
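Dropping the now-merged branch is plain Iceberg DDL; a minimal sketch matching the `test` branch named in the log:

```sql
-- Remove the 'test' branch reference. The snapshots it pointed to
-- remain until snapshot expiration, so no data is deleted immediately.
ALTER TABLE spark_catalog.default.btc DROP BRANCH test;
```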
25/07/01 12:36:10 INFO SparkContext: SparkContext is stopping with exitCode 0.
25/07/01 12:36:10 INFO SparkUI: Stopped Spark web UI at http://100.64.88.58:4040
25/07/01 12:36:10 INFO StandaloneSchedulerBackend: Shutting down all executors
25/07/01 12:36:10 INFO StandaloneSchedulerBackend$StandaloneDriverEndpoint: Asking each executor to shut down
25/07/01 12:36:10 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
25/07/01 12:36:10 INFO MemoryStore: MemoryStore cleared
25/07/01 12:36:10 INFO BlockManager: BlockManager stopped
25/07/01 12:36:10 INFO BlockManagerMaster: BlockManagerMaster stopped
25/07/01 12:36:10 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
25/07/01 12:36:10 INFO SparkContext: Successfully stopped SparkContext
25/07/01 12:36:10 INFO ShutdownHookManager: Shutdown hook called
25/07/01 12:36:10 INFO ShutdownHookManager: Deleting directory /tmp/spark-9eab5ba7-4a6a-49c1-ade0-bc54356ef3f3
25/07/01 12:36:10 INFO ShutdownHookManager: Deleting directory /tmp/spark-618ab58d-f87c-47c0-8df5-59f157869c44