DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-20 are currently pending in the present application, with claims 1, 9, and 17 being independent.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 3, 5-7, 9-11, 13-15, 17, and 19-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Apple Developer, "ARKit | Apple Developer Documentation," ARKit, archived April 30, 2022, at the Wayback Machine, https://web.archive.org/web/20220430100339/https://developer.apple.com/documentation/arkit/
Regarding claim 1, Apple Developer discloses a method comprising:
receiving a request for an anchor associated with a location (See ARKit Webpage, Section Setup -> link class ARSession, Section Responding to Events; protocol ARSessionDelegate //Methods you can implement to receive captured video frame images and tracking state from an AR session. See ARKit Webpage, Section Virtual Content -> link Content Anchors; Anchors identify the position of items in your augmented reality session… See ARKit Webpage, Section Setup -> link class ARSession, Section Managing Anchors; func add(anchor: ARAnchor) //Adds the specified anchor to be tracked by the session. See ARKit Webpage, Section Setup -> link Configuration Objects -> link ARWorldTrackingConfiguration; Find the 3D positions of real-world features that correspond to a touch point on the device's screen with ray casting. Examiner's note: ARAnchor is the set of anchors recorded in the world map)
determining whether the location includes a structure (See ARKit Webpage, Section Setup -> link Configuration Objects, Section Spatial Tracking -> link ARWorldTrackingConfiguration, Section Tracking Surfaces; var planeDetection: ARWorldTrackingConfiguration.PlaneDetection //A value that specifies whether and how the session automatically attempts to detect flat surfaces in the camera-captured image…Section Detecting 3D Objects; var detectionObjects: Set<ARReferenceObject> //A set of 3D objects that the framework attempts to detect in the user’s environment. Examiner's note: ARPlaneAnchor and ARObjectAnchor are used to determine structure at a location, and both are considered structures),
in response to determining the location includes a structure, retrieving data associated with the structure (See ARKit Webpage, Section Setup -> link Configuration Objects, Section Spatial Tracking -> link ARWorldTrackingConfiguration; Find real-world horizontal or vertical surfaces with planeDetection. Add the surfaces to the session as ARPlaneAnchor objects…Recognize 3D objects with detectionObjects. Add 3D objects to the scene as ARObjectAnchor objects. See ARKit Webpage, Section Setup -> link class ARSession, Section Responding to Events -> link protocol ARSessionDelegate; func session(ARSession, didAdd: [ARAnchor]) //Tells the delegate that one or more anchors have been added to the session. Examiner's note: ARKit calls the delegate's session(_:didAdd:) with an anchor, either ARPlaneAnchor or ARObjectAnchor, and each anchor provides details about the object, like its real-world position and shape), and determining if a quality associated with the data associated with the structure (See ARKit Webpage, Section Virtual Content -> link Content Anchors, Section Surface Detection -> link class ARMeshAnchor //An anchor for a physical object that ARKit detects and recreates virtually using a polygonal mesh -> link ARMeshGeometry; The information in this class holds the geometry data for a single anchor of the scene mesh. Each vertex in the anchor’s mesh represents one connection point. Every three-vertex combination forms a unique triangle called a face…var geometry: ARMeshGeometry //3D information about the mesh such as its shape and classifications.) meets at least one criterion (See ARKit Webpage, Section Setup -> link Configuration Objects -> link ARWorldTrackingConfiguration, Section Tracking Surfaces -> link var sceneReconstruction: ARConfiguration.SceneReconstruction; A flag that enables scene reconstruction…When you enable scene reconstruction, ARKit provides a polygonal mesh that estimates the shape of the physical environment. 
Before setting this property, call supportsSceneReconstruction(_:) to ensure device support. Examiner's note: supportsSceneReconstruction(_:) is the criterion that has to be met before generation of the anchor),
and in response to determining the quality meets the at least one criterion, generating the anchor based on the data associated with the structure (See ARKit Webpage, Section Setup -> link Configuration Objects -> link ARWorldTrackingConfiguration, Section Tracking Surfaces -> link var sceneReconstruction: ARConfiguration.SceneReconstruction; A flag that enables scene reconstruction…When you enable scene reconstruction, ARKit provides a polygonal mesh that estimates the shape of the physical environment. See more on ARKit Webpage, Section Virtual Content -> link Content Anchors, Section Surface Detection; {} Tracking and Visualizing Planes //Detect surfaces in the physical environment and visualize their shape and location in 3D space…and Section Physical Objects; {} Visualizing and Interacting with a Reconstructed Scene //Estimate the shape of the physical environment using a polygonal mesh. Examiner’s note: in response to the determination that the device supports scene reconstruction, generate a reconstructed polygonal mesh given ARMeshAnchor).
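For clarity of the record, the configuration workflow cited above may be sketched as follows. This is the examiner's illustration assembled from the cited documentation, not code reproduced from the reference:

```swift
import ARKit

// Enable plane detection and, where the device supports it, scene
// reconstruction; supportsSceneReconstruction(_:) is the cited criterion
// that must be met before ARKit generates mesh anchors.
func configureSession(_ session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }
    session.run(configuration)
}
```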
Regarding claim 3, Apple Developer discloses the method of claim 1, and further discloses in response to determining the location does not include a structure (See ARKit Webpage, Section Virtual Content -> Content Anchors; class ARPlaneAnchor //An anchor for a 2D planar surface that ARKit detects in the physical environment), generating the anchor based on a terrain elevation of the location (See ARKit Webpage, Section Setup -> link Configuration Objects, Section Spatial Tracking -> link ARWorldTrackingConfiguration; Find real-world horizontal or vertical surfaces with planeDetection. Add the surfaces to the session as ARPlaneAnchor objects. Examiner's note: if the interpretation is that ARPlaneAnchor is not considered a structure, then it is a terrain elevation of the location), and communicating the anchor in response to the request (See ARKit Webpage, Section Setup -> link Configuration Objects -> link ARWorldTrackingConfiguration -> link var planeDetection: ARWorldTrackingConfiguration.PlaneDetection; If you enable horizontal or vertical plane detection, the session adds ARPlaneAnchor objects and notifies your ARSessionDelegate, ARSCNViewDelegate, or ARSKViewDelegate object when its analysis of captured video images detects an area that appears to be a flat surface. See ARKit Webpage, Section Virtual Content -> link Content Anchors -> link ARPlaneAnchor; When you enable planeDetection in a world tracking session, ARKit notifies your app of all the surfaces it observes using the device’s back camera. ARKit calls your delegate’s session(_:didAdd:) with an ARPlaneAnchor for each unique surface. Each plane anchor provides details about the surface, like its real-world position and shape).
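The delegate mechanism cited above, by which the anchor is communicated, may be sketched as follows (examiner's illustration based on the cited documentation, not code from the reference):

```swift
import ARKit

// ARKit communicates each detected plane anchor to the app through the
// ARSessionDelegate callback session(_:didAdd:).
class AnchorReceiver: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            // Each plane anchor provides details about the surface,
            // such as its real-world position and center.
            print(plane.transform, plane.center)
        }
    }
}
```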
Regarding claim 5, Apple Developer discloses the method of claim 1, and further discloses wherein the determining of whether the location includes a structure includes: reading data from a data structure (See ARKit Webpage, Section Setup -> link class ARSession, Section Responding to Events -> link ARFrame, Section Tracking and interacting with the real world; var anchors: [ARAnchor] //The list of anchors representing positions tracked or objects detected in the scene), the data indicating whether or not the location includes the structure (See ARKit Webpage, Section Setup -> link Configuration Objects, Section Spatial Tracking -> link ARWorldTrackingConfiguration, Section Tracking Surfaces; var planeDetection: ARWorldTrackingConfiguration.PlaneDetection //A value that specifies whether and how the session automatically attempts to detect flat surfaces in the camera-captured image…Section Detecting 3D Objects; var detectionObjects: Set<ARReferenceObject> //A set of 3D objects that the framework attempts to detect in the user’s environment).
Regarding claim 6, Apple Developer discloses the method of claim 1, and further discloses wherein the determining of whether the location includes a structure includes: determining a mesh geometry representation of the location (See ARKit Webpage, Section Virtual Content -> link Content Anchors, Section Surface Detection -> link class ARMeshAnchor -> link ARMeshGeometry -> link ARMeshClassification; When you enable sceneReconstruction on a world-tracking configuration, ARKit provides several mesh anchors (ARMeshAnchor) that collectively estimate the shape of the physical environment. Within that model of the real world, ARKit may identify specific objects, like seats, windows, tables, or walls. ARKit shares that information by exposing one or more ARMeshClassification instances in a mesh’s geometry property), the mesh geometry indicating whether or not the location includes the structure (See more on ARKit Webpage, Section Virtual Content -> link Content Anchors, Section Surface Detection; {} Tracking and Visualizing Planes //Detect surfaces in the physical environment and visualize their shape and location in 3D space…and Section Physical Objects; {} Visualizing and Interacting with a Reconstructed Scene //Estimate the shape of the physical environment using a polygonal mesh).
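The cited use of mesh classifications to indicate structure may be sketched as follows. The containsStructure predicate and the chosen classification set are the examiner's illustration, not ARKit API:

```swift
import ARKit

// Reads the per-face classification buffer of a mesh anchor's geometry and
// reports whether any face is classified as a wall, window, or door.
func containsStructure(_ meshAnchor: ARMeshAnchor) -> Bool {
    let geometry = meshAnchor.geometry
    guard let classification = geometry.classification else { return false }
    let base = classification.buffer.contents()
    for faceIndex in 0..<geometry.faces.count {
        let address = base.advanced(by: classification.offset
                                        + classification.stride * faceIndex)
        let raw = Int(address.assumingMemoryBound(to: UInt8.self).pointee)
        if let value = ARMeshClassification(rawValue: raw),
           value == .wall || value == .window || value == .door {
            return true
        }
    }
    return false
}
```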
Regarding claim 7, Apple Developer discloses the method of claim 6, and further discloses wherein the mesh geometry representation of the location includes at least one of a mesh geometry representation of the structure and a mesh geometry representation of a terrain associated with the location (See more on ARKit Webpage, Section Virtual Content -> link Content Anchors, Section Surface Detection; {} Tracking and Visualizing Planes //Detect surfaces in the physical environment and visualize their shape and location in 3D space…and Section Physical Objects; {} Visualizing and Interacting with a Reconstructed Scene //Estimate the shape of the physical environment using a polygonal mesh).
Regarding claim 9, claim 9 is the system claim (See ARKit Webpage, Section Essentials -> link Verifying Device Support and User Permissions; iOS device with an A9 or later processor…device camera) corresponding to method claim 1, and is accordingly rejected using substantially the same rationale as that set forth with respect to claim 1.
Regarding claim 10, Apple Developer discloses the system of claim 9, and further discloses wherein the anchor is determined based on the data associated with the structure (See ARKit Webpage, Section Setup -> link Configuration Objects, Section Spatial Tracking -> link ARWorldTrackingConfiguration; Find real-world horizontal or vertical surfaces with planeDetection. Add the surfaces to the session as ARPlaneAnchor objects…Recognize 3D objects with detectionObjects. Add 3D objects to the scene as ARObjectAnchor objects. See ARKit Webpage, Section Setup -> link class ARSession, Section Responding to Events -> link protocol ARSessionDelegate; func session(ARSession, didAdd: [ARAnchor]) //Tells the delegate that one or more anchors have been added to the session).
Regarding claim 11, claim 11 recites limitations similar to those of claim 3, except that it is a system claim (See ARKit Webpage, Section Essentials -> link Verifying Device Support and User Permissions; iOS device with an A9 or later processor…device camera), and is therefore rejected under the same rationale as claim 3.
Regarding claim 13, claim 13 recites limitations similar to those of claim 5, except that it is a system claim (See ARKit Webpage, Section Essentials -> link Verifying Device Support and User Permissions; iOS device with an A9 or later processor…device camera), and is therefore rejected under the same rationale as claim 5.
Regarding claim 14, claim 14 recites limitations similar to those of claim 6, except that it is a system claim (See ARKit Webpage, Section Essentials -> link Verifying Device Support and User Permissions; iOS device with an A9 or later processor…device camera), and is therefore rejected under the same rationale as claim 6.
Regarding claim 15, claim 15 recites limitations similar to those of claim 7, except that it is a system claim (See ARKit Webpage, Section Essentials -> link Verifying Device Support and User Permissions; iOS device with an A9 or later processor…device camera), and is therefore rejected under the same rationale as claim 7.
Regarding claim 17, claim 17 is the CRM claim (See ARKit Webpage, Section Essentials -> link Verifying Device Support and User Permissions; iOS device with an A9 or later processor…device camera) corresponding to method claim 1, and is accordingly rejected using substantially the same rationale as that set forth with respect to claim 1.
Regarding claim 19, claim 19 recites limitations similar to those of claim 6, except that it is a CRM claim (See ARKit Webpage, Section Essentials -> link Verifying Device Support and User Permissions; iOS device with an A9 or later processor…device camera), and is therefore rejected under the same rationale as claim 6.
Regarding claim 20, claim 20 recites limitations similar to those of claim 7, except that it is a CRM claim (See ARKit Webpage, Section Essentials -> link Verifying Device Support and User Permissions; iOS device with an A9 or later processor…device camera), and is therefore rejected under the same rationale as claim 7.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 2, 4, 8, 12, 16, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Apple Developer, "ARKit | Apple Developer Documentation," ARKit, archived April 30, 2022, at the Wayback Machine, https://web.archive.org/web/20220430100339/https://developer.apple.com/documentation/arkit/ in view of Unity Technologies, "Unity - Manual: Level of Detail (LOD) for meshes," Unity Documentation, archived March 14, 2023, at the Wayback Machine, https://web.archive.org/web/20230314183152/https://docs.unity3d.com/Manual/LevelOfDetail.html
Regarding claim 2, Apple Developer discloses the method of claim 1, but does not disclose wherein the data associated with the structure includes a level of detail (LOD) of mesh geometries of the structure and the anchor is generated based on the LOD of mesh geometries.
In the same art of interactive 3D graphics for AR scene rendering, Unity Technologies discloses wherein the data associated with the structure includes a level of detail (LOD) of mesh geometries of the structure and the anchor is generated based on the LOD of mesh geometries (Images LOD0-LOD1 and Section LOD Levels; A LOD level is a mesh that defines the level of detail Unity renders for a GameObject’s geometry. When a GameObject uses LOD, Unity displays the appropriate LOD level for that GameObject based on the GameObject’s distance from the Camera).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Unity’s standard LOD practice for meshes associated with GameObject anchors into Apple’s ARKit augmented reality system. Doing so would allow the system to use higher-detail or lower-detail geometry meshes depending on camera position. This yields predictable results in reducing GPU load and power consumption without changing ARKit’s anchoring logic, a routine performance optimization in real-time graphics.
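The threshold-based LOD selection relied upon above may be sketched as follows; selectLODLevel and its threshold list are the examiner's hypothetical illustration of Unity's screen-relative-height selection, not Unity or ARKit API:

```swift
// Thresholds are fractions of screen height, listed from LOD0 downward,
// e.g. [0.6, 0.3, 0.1]: the first level whose threshold the object's
// screen-relative height meets or exceeds is the level rendered.
func selectLODLevel(screenRelativeHeight: Double, thresholds: [Double]) -> Int {
    for (level, threshold) in thresholds.enumerated()
        where screenRelativeHeight >= threshold {
        return level
    }
    return thresholds.count // below every threshold: lowest detail or culled
}
```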
Regarding claim 4, Apple Developer discloses the method of claim 1, and further discloses generating the anchor based on a terrain elevation of the location (See ARKit Webpage, Section Virtual Content -> link Content Anchors -> link ARPlaneAnchor, Section Classifying a Plane; enum Classification //Possible characterizations of real-world surfaces represented by plane anchors. Examiner's note: if ARKit cannot classify a particular face as a specific object within the enumeration of different classes, then ARKit detects the anchor as an ARPlaneAnchor and classifies real-world surfaces), and communicating the anchor in response to the request (See ARKit Webpage, Section Setup -> link Configuration Objects -> link ARWorldTrackingConfiguration -> link var planeDetection: ARWorldTrackingConfiguration.PlaneDetection; If you enable horizontal or vertical plane detection, the session adds ARPlaneAnchor objects and notifies your ARSessionDelegate, ARSCNViewDelegate, or ARSKViewDelegate object when its analysis of captured video images detects an area that appears to be a flat surface).
Apple Developer does not disclose in response to determining the quality does not meet the at least one criterion.
In the same art of interactive 3D graphics for AR scene rendering, Unity Technologies discloses in response to determining the quality does not meet the at least one criterion (Section LOD Group selection bar; The percentage that appears in each LOD level box represents the threshold at which that level becomes active, based on the ratio of the GameObject’s screen space height to the total screen height. For example, if the threshold for LOD 1 is set to 50%, then LOD 1 becomes active when the camera pulls back far enough that the GameObject’s height fills half of the view. Examiner's note: if the GameObject’s screen-relative height falls below the configured percentage, the criterion is not met).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Unity’s threshold-based control logic into Apple’s ARKit anchor workflow. When the structure quality (e.g., mesh detail) falls below a threshold, the system would fall back to an alternative anchor placement strategy such as terrain or plane-based anchoring. Threshold-based quality gating is a common technique used in real-time graphics optimization and yields the predictable benefit of improved stability and performance in AR applications.
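The threshold-based gating proposed in the combination may be sketched as follows; meshQuality, qualityThreshold, and AnchorStrategy are the examiner's hypothetical names, not API of either reference:

```swift
// When the structure data does not meet the quality criterion, the combined
// system would fall back to terrain/plane-based anchoring.
enum AnchorStrategy { case structureMesh, terrainPlane }

func chooseAnchorStrategy(meshQuality: Double,
                          qualityThreshold: Double) -> AnchorStrategy {
    return meshQuality >= qualityThreshold ? .structureMesh : .terrainPlane
}
```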
Regarding claim 8, Apple Developer discloses the method of claim 1, but does not disclose wherein the data associated with the structure includes a level of detail (LOD) of mesh geometries of the structure, and the at least one criterion is based on the LOD.
In the same art of interactive 3D graphics for AR scene rendering, Unity Technologies discloses wherein the data associated with the structure includes a level of detail (LOD) of mesh geometries of the structure (Section Set up LOD in Unity; To use LOD, you must have a GameObject with a LOD Group component. The LOD Group component provides controls to define how LOD behaves on this GameObject), and the at least one criterion is based on the LOD (Par. 1; a GameObject must have a number of meshes with decreasing levels of detail in its geometry. These meshes are called LOD levels. The farther a GameObject is from the Camera, the lower-detail LOD level Unity renders. This technique reduces the load on the hardware for these distant GameObjects, and can therefore improve rendering performance. -> link LOD Group, Section LOD Group selection bar; The percentage that appears in each LOD level box represents the threshold at which that level becomes active, based on the ratio of the GameObject’s screen space height to the total screen height. -> link LOD Bias; LOD levels are chosen based on the onscreen size of an object. When the size is between two LOD levels, the choice can be biased toward the less detailed or more detailed of the two Models available. This is set as a fraction from 0 to +infinity. When it is set between 0 and 1 it favors less detail. A setting of more than 1 favors greater detail. For example, setting LOD Bias to 2 and having it change at 50% distance, LOD actually only changes on 25%).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Unity’s LOD levels and their thresholds as the criterion into Apple’s ARKit anchoring system. Doing so allows treating higher LODs as structure data and lower LODs as ground data so as to favor terrain-based placement. Using Unity’s existing LOD thresholds as a decision parameter yields the predictable benefit of balancing visual quality against frame rate in AR and improving rendering performance (Unity Technologies, Level of Detail (LOD) for meshes; This technique reduces the load on the hardware for these distant GameObjects, and can therefore improve rendering performance. Unity Technologies, LOD Group, Section Transitioning between LOD levels; Smooth transitions between LOD levels improves the player’s experience of your game. As the Camera moves closer or farther away, you don’t want players to see an obvious switchover (sometimes called popping) from the current LOD level to the next). Using the LOD level of the anchored mesh as the criterion in the ARKit system is therefore an obvious design choice.
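The LOD Bias example quoted above (a bias of 2 moving a 50% switch point to 25%) is consistent with dividing the configured threshold by the bias; the function below is the examiner's hypothetical model of that quoted example, not Unity API:

```swift
// A bias greater than 1 favors greater detail by deferring the switch:
// effectiveSwitchPoint(threshold: 0.5, lodBias: 2) yields 0.25, matching
// the example quoted from the Unity documentation.
func effectiveSwitchPoint(threshold: Double, lodBias: Double) -> Double {
    return threshold / lodBias
}
```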
Regarding claim 12, claim 12 recites limitations similar to those of claim 4, except that it is a system claim (See ARKit Webpage, Section Essentials -> link Verifying Device Support and User Permissions; iOS device with an A9 or later processor…device camera), and is therefore rejected under the same rationale as claim 4.
Regarding claim 16, claim 16 recites limitations similar to those of claim 8, except that it is a system claim (See ARKit Webpage, Section Essentials -> link Verifying Device Support and User Permissions; iOS device with an A9 or later processor…device camera), and is therefore rejected under the same rationale as claim 8.
Regarding claim 18, claim 18 recites limitations similar to those of claim 2, except that it is a CRM claim (See ARKit Webpage, Section Essentials -> link Verifying Device Support and User Permissions; iOS device with an A9 or later processor…device camera), and is therefore rejected under the same rationale as claim 2.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENNY NGAN TRAN whose telephone number is (571)272-6888. The examiner can normally be reached Mon-Thurs 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JENNY N TRAN/Examiner, Art Unit 2615
/ALICIA M HARRINGTON/Supervisory Patent Examiner, Art Unit 2615