ARWorldTrackingConfiguration¶
Inherits: RefCounted < Object
Configuration for world-tracking AR sessions.
Description¶
Configures a world-tracking session for 6DoF device tracking plus optional features such as plane detection, scene reconstruction, image tracking, collaboration, and frame semantics. Use this configuration with ARSession.run(). Availability of individual features varies by platform and device.
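A minimal configuration sketch. It assumes enum and flag values are exposed as constants on ARWorldTrackingConfiguration, and `ar_session` is a placeholder for your ARSession instance; adjust names to your setup.

```gdscript
# Sketch: configure and start a world-tracking session.
var config := ARWorldTrackingConfiguration.new()
config.world_alignment = ARWorldTrackingConfiguration.GRAVITY
# PlaneDetection values are bit flags and can be combined.
config.plane_detection_mask = ARWorldTrackingConfiguration.HORIZONTAL | ARWorldTrackingConfiguration.VERTICAL
config.is_light_estimation_enabled = true
ar_session.run(config)  # ar_session: your ARSession instance
```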
Properties¶
bool | app_clip_code_tracking_enabled | false
bool | automatic_image_scale_estimation_enabled | false
String | detection_image_group_name | ""
int | environment_texturing | 0
int | frame_semantics_mask | 0
bool | hand_tracking_enabled | false
bool | is_auto_focus_enabled | true
bool | is_collaboration_enabled | false
bool | is_light_estimation_enabled | true
int | maximum_number_of_tracked_images | 0
int | plane_detection_mask | 0
bool | provides_audio_data | false
int | scene_reconstruction | 0
bool | user_face_tracking_enabled | false
bool | wants_hdr_environment_textures | false
int | world_alignment | 0
Methods¶
bool | is_supported() static
bool | set_detection_image_group(groupName: String)
bool | supports_app_clip_code_tracking() static
bool | supports_scene_reconstruction(scene_reconstruction: SceneReconstruction) static
bool | supports_user_face_tracking() static
Enumerations¶
enum WorldAlignment: 🔗
WorldAlignment GRAVITY = 0
Align the world so the Y axis matches gravity.
WorldAlignment GRAVITY_AND_HEADING = 1
Align the world to gravity and the device heading.
WorldAlignment CAMERA = 2
Align the world relative to the camera orientation at session start.
enum EnvironmentTexturing: 🔗
EnvironmentTexturing NONE = 0
Disable environment texturing.
EnvironmentTexturing MANUAL = 1
Use manually provided environment probes only.
EnvironmentTexturing AUTOMATIC = 2
Allow ARKit to generate environment probes automatically.
enum PlaneDetection: 🔗
PlaneDetection HORIZONTAL = 1
Detect horizontal planes.
PlaneDetection VERTICAL = 2
Detect vertical planes.
PlaneDetection SLANTED = 4
Detect slanted planes when supported.
enum SceneReconstruction: 🔗
SceneReconstruction MESH = 1
Reconstruct scene geometry as a mesh.
SceneReconstruction MESH_WITH_CLASSIFICATION = 3
Reconstruct scene geometry and classify mesh faces.
enum FrameSemantics: 🔗
FrameSemantics PERSON_SEGMENTATION = 1
Enable person segmentation mattes.
FrameSemantics PERSON_SEGMENTATION_WITH_DEPTH = 2
Enable person segmentation mattes with estimated depth.
FrameSemantics BODY_DETECTION = 4
Enable body detection semantics.
Property Descriptions¶
bool app_clip_code_tracking_enabled = false 🔗
void set_app_clip_code_tracking_enabled(value: bool)
bool get_app_clip_code_tracking_enabled()
When enabled, the session attempts to detect App Clip codes.
bool automatic_image_scale_estimation_enabled = false 🔗
void set_automatic_image_scale_estimation_enabled(value: bool)
bool get_automatic_image_scale_estimation_enabled()
When enabled, ARKit refines the physical scale estimate of tracked reference images.
String detection_image_group_name = "" 🔗
void set_detection_image_group_name(value: String)
String get_detection_image_group_name()
The asset-catalog image group used for reference-image detection.
int environment_texturing = 0 🔗
void set_environment_texturing(value: int)
int get_environment_texturing()
Controls how environment textures are generated. See EnvironmentTexturing.
int frame_semantics_mask = 0 🔗
void set_frame_semantics_mask(value: int)
int get_frame_semantics_mask()
Bitmask of FrameSemantics values enabled for the session.
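For illustration, a hedged sketch of combining and testing FrameSemantics flags, assuming they are exposed as constants on the class:

```gdscript
var config := ARWorldTrackingConfiguration.new()
# Request both person segmentation (1) and body detection (4) semantics.
config.frame_semantics_mask = ARWorldTrackingConfiguration.PERSON_SEGMENTATION | ARWorldTrackingConfiguration.BODY_DETECTION
# Check whether a specific semantic is set before relying on it.
if config.frame_semantics_mask & ARWorldTrackingConfiguration.BODY_DETECTION:
    print("Body detection requested")
```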
bool hand_tracking_enabled = false 🔗
void set_hand_tracking_enabled(value: bool)
bool get_hand_tracking_enabled()
When enabled, visionOS hand-tracking data is included when supported.
bool is_auto_focus_enabled = true 🔗
void set_is_auto_focus_enabled(value: bool)
bool get_is_auto_focus_enabled()
When enabled, the camera may adjust focus automatically during tracking.
bool is_collaboration_enabled = false 🔗
void set_is_collaboration_enabled(value: bool)
bool get_is_collaboration_enabled()
When enabled, the session produces collaboration data for multi-user AR.
bool is_light_estimation_enabled = true 🔗
void set_is_light_estimation_enabled(value: bool)
bool get_is_light_estimation_enabled()
When enabled, ARKit produces ARLightEstimate data for each frame.
int maximum_number_of_tracked_images = 0 🔗
void set_maximum_number_of_tracked_images(value: int)
int get_maximum_number_of_tracked_images()
Maximum number of reference images to track simultaneously.
int plane_detection_mask = 0 🔗
void set_plane_detection_mask(value: int)
int get_plane_detection_mask()
Bitmask of PlaneDetection values that should be detected.
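A short sketch of requesting multiple plane types at once, assuming the PlaneDetection flags are exposed as class constants; SLANTED is silently ignored where unsupported:

```gdscript
var config := ARWorldTrackingConfiguration.new()
# Detect horizontal, vertical, and (where supported) slanted planes.
config.plane_detection_mask = (
    ARWorldTrackingConfiguration.HORIZONTAL
    | ARWorldTrackingConfiguration.VERTICAL
    | ARWorldTrackingConfiguration.SLANTED
)
```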
bool provides_audio_data = false 🔗
void set_provides_audio_data(value: bool)
bool get_provides_audio_data()
When enabled on supported platforms, captured audio data is included with the session.
int scene_reconstruction = 0 🔗
void set_scene_reconstruction(value: int)
int get_scene_reconstruction()
The requested scene reconstruction mode. See SceneReconstruction.
bool user_face_tracking_enabled = false 🔗
void set_user_face_tracking_enabled(value: bool)
bool get_user_face_tracking_enabled()
When enabled on supported iOS devices, ARKit also tracks the user’s face during world tracking.
bool wants_hdr_environment_textures = false 🔗
void set_wants_hdr_environment_textures(value: bool)
bool get_wants_hdr_environment_textures()
Requests HDR environment textures when environment texturing is enabled.
int world_alignment = 0 🔗
void set_world_alignment(value: int)
int get_world_alignment()
Defines how the world coordinate system is aligned. See WorldAlignment.
Method Descriptions¶
bool is_supported() static 🔗
Returns true when the current device supports world tracking.
bool set_detection_image_group(groupName: String) 🔗
Loads a reference image group from the asset catalog and assigns it to detection_image_group_name. Returns true on success.
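A usage sketch; "ARResources" is a hypothetical asset-catalog group name, not a value defined by this class:

```gdscript
var config := ARWorldTrackingConfiguration.new()
# Load a reference image group; the name "ARResources" is a placeholder.
if not config.set_detection_image_group("ARResources"):
    push_warning("Reference image group not found in the asset catalog")
# Track up to four of the detected images at the same time.
config.maximum_number_of_tracked_images = 4
```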
bool supports_app_clip_code_tracking() static 🔗
Returns true when App Clip code tracking is supported on the current device.
bool supports_scene_reconstruction(scene_reconstruction: SceneReconstruction) static 🔗
Returns true when the requested scene reconstruction mode is supported.
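A common guard pattern, sketched under the assumption that the SceneReconstruction values are exposed as class constants: prefer classification when available, fall back to a plain mesh otherwise.

```gdscript
var config := ARWorldTrackingConfiguration.new()
# Request the richest reconstruction mode the device supports.
if ARWorldTrackingConfiguration.supports_scene_reconstruction(ARWorldTrackingConfiguration.MESH_WITH_CLASSIFICATION):
    config.scene_reconstruction = ARWorldTrackingConfiguration.MESH_WITH_CLASSIFICATION
elif ARWorldTrackingConfiguration.supports_scene_reconstruction(ARWorldTrackingConfiguration.MESH):
    config.scene_reconstruction = ARWorldTrackingConfiguration.MESH
```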
bool supports_user_face_tracking() static 🔗
Returns true when simultaneous world and user face tracking is supported.