DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/8/2026 has been entered.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-6, 8-17, and 19-24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
The term “non-transient” in claim 1 is a relative term which renders the claim indefinite. The term “non-transient” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The claim recites “determining that the occlusion is non-transient,” which acts as a functional trigger for “generating a second user interface,” but the claim fails to provide an objective standard, baseline, or specific temporal threshold marking the boundary between a “transient” occlusion and a “non-transient” occlusion. One of ordinary skill in the art would not understand the objective duration of time required to satisfy this limitation (e.g., whether “non-transient” requires an occlusion lasting for fractions of a second, several seconds, or a permanent physical obstruction). In the context of a dynamic, real-time augmented reality environment, a “transient” occlusion could be interpreted subjectively as lasting anywhere from a few milliseconds (from the perspective of a high-speed processor processing real-time camera and AR placement data) to several seconds (from the perspective of a human user) to any user-defined threshold in the context of use. Without any defined or objective standard against which to measure this relative terminology, the metes and bounds of the claim cannot be reasonably ascertained.
Relative terminology is, of course, not inherently indefinite; however, when a claim uses a term of degree, the specification must provide some standard for measuring that degree so that a person having ordinary skill in the art (PHOSITA) can reasonably ascertain the metes and bounds of the claimed invention. Here, even when read in light of the Specification at paragraphs 0070-0074, which contain teachings relating to determination of an occlusion as “non-transient,” the metes and bounds of “non-transient” and “transient” remain unclear to a PHOSITA. The Specification attempts to clarify the term in paragraph 0071 by stating that if an occlusion “is unmoved and the occlusion lasts longer than a defined period of time (e.g., two seconds), the AR system 400 may categorize the occlusion as non-transient.” This at least provides an example of when an object may be considered non-transient, but it also introduces another condition, and thus does not provide a PHOSITA with the necessary delineation between transient and non-transient that would render the claim definite. Rather, for this facially subjective term, as in MPEP 2173, “the definiteness requirement is not satisfied by merely offering examples that satisfy the term within the specification.” By using permissive language (“may”) and offering an arbitrary, non-binding example (“e.g., two seconds”), the Specification fails to establish a definitive metric, formula, or objective standard. The claim is therefore rendered indefinite.
Note that claims 12 and 22 recite the same indefinite claim language and are rejected for the same reasons as claim 1 above. Furthermore, all dependent claims are rejected for inheriting this deficiency from their parent claims without curing it through their own further limitations.
In the interest of compact prosecution, the Examiner will interpret that an occlusion is considered non-transient if the occlusion lasts for a duration sufficient to be considered more than merely momentary, fleeting, or instantaneous (all being examples of meanings of transient), thereby warranting a response from the system, such that with respect to the system the occlusion can be considered non-transient.
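For illustration only, the interpretation applied above can be sketched in code, assuming a hypothetical duration threshold (echoing the Specification's non-binding “e.g., two seconds” example); all identifiers and the threshold value are assumptions for illustration and are not drawn from the claims:

```python
# Illustrative sketch only: one possible objective reading of "non-transient,"
# assuming a hypothetical duration threshold. Names and the threshold value
# are assumptions, not limitations recited in the claims.
from dataclasses import dataclass

TRANSIENT_THRESHOLD_S = 2.0  # hypothetical; the claims recite no such value


@dataclass
class Occlusion:
    start_time_s: float  # when the occluding object first blocked the region
    moved: bool          # whether the occluding object has moved since


def is_non_transient(occ: Occlusion, now_s: float) -> bool:
    """Classify an occlusion as non-transient once it has persisted,
    unmoved, longer than the assumed threshold (cf. Specification 0071)."""
    return (not occ.moved) and (now_s - occ.start_time_s) > TRANSIENT_THRESHOLD_S
```

The sketch makes concrete why the claim language is indefinite: the boundary between transient and non-transient depends entirely on the chosen threshold value, which the claims do not supply.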
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-6, 8, 10-17, 19, and 21-24 is/are rejected under 35 U.S.C. 102(a)(1) and/or 102(a)(2) as being anticipated by Pandey et al.1 (“Pandey”).
Regarding claim 1, as rendered definite as explained above, Pandey teaches a computer-implemented method comprising (see Pandey, paragraphs 0033-0042 teaching “a system 100 configured to display virtual objects using an HMD to a user” and “system 100 may be implemented using a computerized device, such as computer system 900 of FIG. 9. The modules of system 100 may be combined or divided into fewer or greater numbers of modules. Multiple modules may be implemented by a computerized device. For instance, virtual object manager 150 may be implemented as instructions executed by a computerized device (e.g., a processor, computer system)” such that the method as further explained below is computer-implemented as by some “computerized device” for example):
detecting an object in a real-world space within a visual field of a device, a view of the real-world space captured by a sensor and displayed by the device (note that an “object” in the field of computer vision can be considered an entity of interest in a scene and may be any collection of surfaces or regions identified in a scene; for example, an object can comprise multiple objects, as where a collection of detected surfaces and/or objects in a scene may be considered an object in a real-world space where such an object is treated collectively by the system as the relevant object for AR placement of items to define some UI for a view of the real-world space; see Pandey, paragraphs 0033-0039 teaching “System 100 may include image capture module 110, object, color, and brightness identification and tracking module 120, display module 130, motion/focus tracking module 135, user interface module 140, virtual object manager 150, user preference module 160, and power supply module 170. Other embodiments of system 100 may include fewer or greater numbers of components. System 100 may be or may be part of an augmented reality device. Such an AR device may be worn or otherwise used by a user” and “image capture module 110 may include an image capture device that is positioned to capture a field-of-view of a real-world scene that is being viewed by a user” and “images may be processed to determine the location of various objects within the real-world scene” where further “Images captured by image capture module 110 may be passed to object, color, and brightness identification and tracking module 120. Object, color, and brightness identification and tracking module 120 may perform multiple functions. First, the colors present within the real-world scene may be identified. For one or more regions of the real-world scene, the colors and/or the predominant color may be identified. Second, the brightness of regions of the real-world scene may be identified. 
For one or more regions of the real-world scene, the brightness level may be determined. Third, real-world objects within the scene may be classified and tracked. For instance, the position of persons, faces, screens, text, objects with high color and/or texture variability, floors, ceilings, walls, regions of a similar color and/or brightness level, and/or other objects within the real-world scene may be identified and/or tracked. For the purposes of this disclosure, any visible item in the real-world scene may be considered a real-world object. The color, brightness, and/or classification of real-world objects within the real-world scene may be evaluated to determine how virtual objects should be presented to the user” such that here a collection of “visible items” comprise the scene and for example these visible items such as colors, regions, and surfaces which may be further classified into identified “objects” comprise a scene object with potential areas for augmentation by AR graphics and this collection of visible items as evaluated are passed to the “virtual object manager 150” where then “Based on input from virtual object manager 150, the color, position, and/or brightness of virtual objects may be set and/or modified” and note that for example the view of the real-world space is displayed by the display device as mediated through the AR display as taught where “display module 130 may include a projector that either projects light directly into one or both eyes of the user or projects the light onto a reflective surface that the user views. In some embodiments, the user wears glasses (or a single lens) onto which light is projected by the display module 130. Accordingly, the user may view virtual objects and real-world objects present in the scene simultaneously”);
determining a non-overlay-acceptable region of the object on which augmented reality (AR) content is never to be overlaid (see Pandey, paragraphs 0033-0039 as explained above where the object comprises the visible items as detected and sent to the virtual object manager and where “Based on input from virtual object manager 150, the color, position, and/or brightness of virtual objects may be set and/or modified” and “Virtual object manager 150 may access a user preference module 160 for use in determining the appropriate color, position, and/or brightness level to use for virtual objects. Virtual object manager 150 may receive input from object, color, and brightness identification and tracking module 120. For instance, based on the objects, colors, and/or brightness identified by identification and tracking module 120, virtual object manager 150 may recolor, adjust brightness, and/or re-position virtual objects” and as further taught in paragraph 0056 and Table 1 “virtual objects may be displayed over regions with the least priority” such that for example “If no blank space is present within a real-world scene or the blank space has already been superimposed with virtual objects, the next-lowest priority region may be used” and there may be regions of the scene object classified such as “Text” and “User’s body” such that “certain priority levels may never be superimposed with virtual objects. For instance, certain classifications may be set to never be superimposed with virtual objects. In the example of table 1, the user's own body and text present in the real-world scene are never superimposed with virtual objects. The user, via user preferences, may specify classifications which are never superimposed with virtual objects” and are thus non-overlay-acceptable regions);
generating a first user interface associated with the object for display by the device, the first user interface including first AR content arranged in a first layout (note that a “user interface associated with the object” with “AR content arranged in a first layout” is considered to be any view of the object in the AR system as an AR view here is necessarily a user interface as the view literally is the interface to the system and if such user interface has augmented reality (AR) content arranged in the UI then the claim language is met; thus see Pandey, paragraphs 0033-0039 as explained above where “Based on input from virtual object manager 150, the color, position, and/or brightness of virtual objects may be set” and “the color, position, and/or brightness of the virtual objects may be controlled via virtual object manager 150. Virtual object manager 150 may access a user preference module 160 for use in determining the appropriate color, position, and/or brightness level to use for virtual objects. Virtual object manager 150 may receive input from object, color, and brightness identification and tracking module 120” such that this input from “module 120” is the object analyzed for display and the setting of the virtual objects generates a first user interface including AR content arranged in a first layout such as for example a layout as described in paragraphs 0043-0056 and as in figures 2A and 2B where “locations, colors, and/or positions of the virtual objects 250 may be determined based on the real-world objects present within the scene (and as present in images captured by the AR device)” where “priority” can establish a layout based on the view captured of the real-world scene as where “virtual objects may be displayed over regions with the least priority. As such, if present in a scene viewed by a user, blank space may be used to present virtual objects. 
If no blank space is present within a real-world scene or the blank space has already been superimposed with virtual objects, the next-lowest priority region may be used. In the example of Table 1, any furniture (e.g., tables, chairs) may be superimposed with virtual objects. If none are present or the furniture that is present has already been superimposed with virtual objects, display devices, if present, may be superimposed with virtual objects. In some embodiments, certain priority levels may never be superimposed with virtual objects. For instance, certain classifications may be set to never be superimposed with virtual objects. In the example of table 1, the user's own body and text present in the real-world scene are never superimposed with virtual objects. The user, via user preferences, may specify classifications which are never superimposed with virtual objects” and thus when any given arrangement of the AR content is displayed to the user with respect to the object according to the virtual object manager’s control then a first user interface is generated) which avoids the non-overlay-acceptable region (see Pandey, paragraph 0056 as explained above where there may be regions of the scene object classified such as “Text” and “User’s body” such that “certain priority levels may never be superimposed with virtual objects. For instance, certain classifications may be set to never be superimposed with virtual objects. In the example of table 1, the user's own body and text present in the real-world scene are never superimposed with virtual objects. The user, via user preferences, may specify classifications which are never superimposed with virtual objects” and thus the layout set above would avoid the non-overlay-acceptable region as this is explicitly against the control rules) and
responsive to a change in the visual field of the device introducing a constraint relating to viewing of the first user interface, the constraint including an occlusion, by a second object in the real-world space, of a portion of the object on which at least some of the first AR content was overlaid, thereby causing the at least some of the first AR content to disappear; and determining that the occlusion is non-transient, generating a second user interface associated with the object for display by the device (note that a constraint relating to viewing of the first user interface that has been introduced by a change in the visual field of the device may be anything that constrains, restricts, or otherwise limits viewing of the first user interface which is arranged with AR content in a first layout, and thus, for example, if the visual field changes, such as through content in the visual field changing through change in the content or change in device position, pose, or perspective, then any such change could constrain or otherwise limit viewing of the first UI; furthermore note that while the claim now recites "causing the at least some of the first AR content to disappear," the claim does not specifically limit nor define exactly what "disappear" is in reference to, nor how long something must disappear for, nor from where it specifically disappears, and thus if something no longer appears where it was positioned then it must be considered to have disappeared from that position; for example, if a virtual object is moved from a certain placement on an object which is now obscured to another non-obscured region, then it has disappeared from that location only to reappear later in a different position accommodating the occlusion; thus see Pandey, paragraph 0039 establishing “Virtual object manager 150 may serve to adjust the color, brightness, and/or position of virtual objects that are displayed to a user via display module 130” and “based on the 
objects, colors, and/or brightness identified by identification and tracking module 120, virtual object manager 150 may recolor, adjust brightness, and/or re-position virtual objects” such that here any adjusting of the virtual objects displayed to the user is a generating of a second user interface associated with the object, as for example a “recolor, adjust brightness, and/or re-position virtual objects” operation(s) generates another, second user interface associated with the object according to any constraints which the virtual object manager recognizes with respect to any change in the visual field, where this may be a constraint including an occlusion of at least some of the first AR content arranged in whatever is a first layout as set above, such as in paragraphs 0049-0052 teaching “the locations, colors, and brightness used to project virtual objects may vary based on a particular real-world scene and/or user preferences” and “virtual objects being moved to regions of low priority for display to the user” where specifically for example “if the user extends his arm and/or looks down, some portion of the user may be visible, such as the user's hand and/or arm. When the user is looking at or near his hand or arm, the user may desire to watch what he is doing (such as handling or manipulating an object). As such, virtual objects may be made transparent, blurred, moved and/or resized to allow the user to clearly view the user's hand and/or arm. As such, the user's own body may be afforded a high priority. The user's hand and/or arm may be detected via skin tone recognition. 
Once a user has completed handling or manipulating an object, virtual objects may be enlarged, repositioned, sharpened, and/or brightened to occupy at least some of the scene previously occupied by the user's hand and/or arm” such that here the “high priority” of the “user’s hand and/or arm” produces an occlusion that interferes with the “desire to watch what he is doing (such as handling or manipulating an object)” and then “virtual objects may be made transparent, blurred, moved and/or resized to allow the user to clearly view the user's hand and/or arm” such that the virtual objects as adjusted and for example “moved” then comply with the rule to never overlay such a high priority region if it is in the visual field of the device introducing a constraint necessitating change of the first user interface to the adjusted user interface following the priority rules; furthermore see Pandey, paragraph 0051 as explained above where when the virtual content is moved from the first object and region it was overlaying on the first object, due to the detection of the occlusion of the object by a second hand object, it disappears from the occluded region and is moved/re-positioned to another suitable region which creates the second user interface with the content disappeared from the first region and moved to another region and note that the content may reappear from the area it disappeared from “once a user has completed” the occluding behavior and the content is “repositioned” to appear “to occupy at least some of the scene previously occupied by the user’s hand and/or arm”; furthermore note that paragraph 0051 teaches as well the content may disappear as “virtual objects may be made transparent” in response to the user occluding the previously overlaid object “to allow the user to clearly view the user’s hand” such that here within the first AR content provided to the viewer then at least the portion that is being occluded by the user’s arm may disappear through being 
made transparent such that the remaining virtual objects which are not occluded would continue to be displayed according to their priority values and placement rules and would form the second user interface; finally, in relation to determining that the occlusion is non-transient, an occlusion is considered non-transient if the occlusion lasts for a duration sufficient to be considered more than merely momentary, fleeting, or instantaneous (all being examples of meanings of transient), thereby warranting a response from the system, and thus in Pandey as in paragraph 0051 the occluding hand is detected for the duration of time that “handling or manipulating an object” takes such that during this detectable duration the hand is considered more than momentary, fleeting, or instantaneous as it warrants a response from the system lasting for the duration defined by the amount of time the occluding object occludes the original position of the virtual object in the first UI, and due to this determination that the hand object is non-transient the system generates the second UI; further note that Pandey detects for the object and responsively determines the second UI presentation and presents the second UI during the period of handling/manipulation, and then returns the presentation of the objects to the original positions, such that this continuous monitoring for the hand/object presence until the action is completed and the occlusion is gone also shows that the system has determined the object to be more than transient such that it must respond during the sustained non-transient period of occlusion), the second user interface including second AR content arranged in a second layout which avoids the non-overlay-acceptable region, and the second layout being different from the first layout (see Pandey, paragraphs 0049-0052 as explained above teaching “the locations, colors, and brightness used to project virtual objects may vary based on a particular real-world scene and/or 
user preferences” and “virtual objects being moved to regions of low priority for display to the user” where specifically for example “if the user extends his arm and/or looks down, some portion of the user may be visible, such as the user's hand and/or arm. When the user is looking at or near his hand or arm, the user may desire to watch what he is doing (such as handling or manipulating an object). As such, virtual objects may be made transparent, blurred, moved and/or resized to allow the user to clearly view the user's hand and/or arm. As such, the user's own body may be afforded a high priority. The user's hand and/or arm may be detected via skin tone recognition. Once a user has completed handling or manipulating an object, virtual objects may be enlarged, repositioned, sharpened, and/or brightened to occupy at least some of the scene previously occupied by the user's hand and/or arm” such that again here the second layout of the virtual objects is the virtual objects as re-positioned and possibly re-colored or re-sized as explained above, and furthermore this avoids the non-overlay-acceptable regions delineated above as well as deals with the introduced constraint such that the arrangement still avoids all non-overlay-acceptable regions of “high priority”; furthermore, the second user interface, as explained above in relation to Pandey’s teachings at paragraph 0051, could be a second user interface in which the first AR content has disappeared from its previously occupied area to reappear at a repositioned location different from the first location, or could be a second user interface in which the content has been made to disappear through being made transparent such that here within the first AR content provided to the viewer then at least the portion that is being occluded by the user’s arm may disappear through being made transparent such that the remaining virtual objects which are not occluded would continue to be displayed according to 
their priority values and placement rules and would form the second user interface).
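The occlusion-responsive re-layout mapped above (first AR content disappearing from an occluded region and reappearing elsewhere, yielding a second layout) can be sketched as a minimal illustration; the item and region names are assumptions for illustration, not Pandey's or Applicant's actual implementation:

```python
# Illustrative sketch only: generating a "second layout" when a region of the
# "first layout" becomes occluded. Content assigned to an occluded region
# disappears from it and is re-positioned into an overlay-acceptable free
# region, mirroring Pandey's re-positioning described at paragraph 0051.
def relayout(first_layout: dict, occluded: set, free_regions: list) -> dict:
    """Map each AR content item to a region, moving items whose region
    is occluded into the next available non-occluded free region."""
    second_layout = {}
    free = [r for r in free_regions if r not in occluded]
    for item, region in first_layout.items():
        if region in occluded and free:
            # content "disappears" from the occluded region and
            # "reappears" in an overlay-acceptable one
            second_layout[item] = free.pop(0)
        else:
            second_layout[item] = region
    return second_layout
```

As a usage example, an item overlaid on a now-occluded table top would be moved to blank space while an unaffected item keeps its region, producing a second layout different from the first.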
Regarding claim 2, Pandey teaches all that is required as applied to claim 1 above and further teaches determining an overlay-acceptable region of the object, wherein the first layout includes the first AR content overlaid on the overlay-acceptable region of the object (note that the claim does not define any manner in which overlay-acceptable regions of objects are determined; see Pandey, paragraphs 0033-0039 as explained above where “Based on input from virtual object manager 150, the color, position, and/or brightness of virtual objects may be set” and “the color, position, and/or brightness of the virtual objects may be controlled via virtual object manager 150. Virtual object manager 150 may access a user preference module 160 for use in determining the appropriate color, position, and/or brightness level to use for virtual objects. Virtual object manager 150 may receive input from object, color, and brightness identification and tracking module 120” such that this input from “module 120” is the object analyzed for display and the setting of the virtual objects generates a first user interface including AR content arranged in a first layout such as for example a layout as described in paragraphs 0043-0056 and as in figures 2A and 2B where “locations, colors, and/or positions of the virtual objects 250 may be determined based on the real-world objects present within the scene (and as present in images captured by the AR device)” where “priority” can establish a layout based on the view captured of the real-world scene as where “virtual objects may be displayed over regions with the least priority. As such, if present in a scene viewed by a user, blank space may be used to present virtual objects. If no blank space is present within a real-world scene or the blank space has already been superimposed with virtual objects, the next-lowest priority region may be used. 
In the example of Table 1, any furniture (e.g., tables, chairs) may be superimposed with virtual objects. If none are present or the furniture that is present has already been superimposed with virtual objects, display devices, if present, may be superimposed with virtual objects. In some embodiments, certain priority levels may never be superimposed with virtual objects. For instance, certain classifications may be set to never be superimposed with virtual objects. In the example of table 1, the user's own body and text present in the real-world scene are never superimposed with virtual objects. The user, via user preferences, may specify classifications which are never superimposed with virtual objects” such that here areas of lower priority may be considered overlay-acceptable regions of the object where the first AR content can be overlaid on such a region of the object, where for example figures 2A and 2B show examples of virtual AR objects overlaid on overlay-acceptable regions of the object and various objects and regions of the scene object as determined and set by the virtual object manager).
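The priority-table placement quoted above (lowest-priority regions preferred for overlay; certain classifications never superimposed) can be sketched as a minimal illustration; the classification names and priority values are assumptions for illustration, not the actual values of Pandey's Table 1:

```python
# Illustrative sketch only: choosing an overlay region per a priority table.
# Region classifications map to priority values (lower value = lower priority
# = preferred for overlay); some classifications are never overlaid.
from typing import Dict, Optional

NEVER_OVERLAY = {"user_body", "text"}  # classifications never superimposed


def choose_region(regions: Dict[str, int]) -> Optional[str]:
    """Pick the lowest-priority region that is overlay-acceptable,
    or return None if no overlay-acceptable region is in the scene."""
    candidates = {r: p for r, p in regions.items() if r not in NEVER_OVERLAY}
    if not candidates:
        return None
    return min(candidates, key=candidates.get)
```

For example, given a scene containing blank space, furniture, text, and the user's body, the sketch selects blank space (lowest priority among overlay-acceptable regions), and returns no region at all when only never-overlay classifications are present.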
Regarding claim 3, Pandey teaches all that is required as applied to claim 1 above and further teaches wherein the second AR content includes at least some of the first AR content (see Pandey, paragraphs 0049-0052 as explained above teaching “the locations, colors, and brightness used to project virtual objects may vary based on a particular real-world scene and/or user preferences” and “virtual objects being moved to regions of low priority for display to the user” where specifically for example “if the user extends his arm and/or looks down, some portion of the user may be visible, such as the user's hand and/or arm. When the user is looking at or near his hand or arm, the user may desire to watch what he is doing (such as handling or manipulating an object). As such, virtual objects may be made transparent, blurred, moved and/or resized to allow the user to clearly view the user's hand and/or arm. As such, the user's own body may be afforded a high priority. The user's hand and/or arm may be detected via skin tone recognition. Once a user has completed handling or manipulating an object, virtual objects may be enlarged, repositioned, sharpened, and/or brightened to occupy at least some of the scene previously occupied by the user's hand and/or arm” such that again here the second layout of the virtual objects is the virtual objects as re-positioned and possibly re-colored or re-sized as explained above, and for example in the case that they are re-positioned then the second AR content includes at least some of the first AR content, which is the first AR content repositioned into second AR content).
Regarding claim 4, Pandey teaches all that is required as applied to claim 1 above and further teaches wherein the constraint is a first constraint, and wherein the method further comprises: responsive to another change in the visual field of the device introducing a second constraint relating to viewing of the second user interface (note that the constraint as identified above in the independent claim may be considered a first constraint, which was the introduction of an identified region in the field of view which is high priority and occludes the view of the user with virtual objects that overlap with the occluding hand/arm of the user, such as in paragraphs 0051-0052 teaching an example of an initial layout of AR content which is changed responsive to a change in the visual field such as a hand and AR content occluding each other, and with regard to other changes in visual field, see Pandey, paragraphs 0030-0032 teaching “a priority can be assigned to different regions of a real-world scene based on objects identified in the real-world scene and/or by tracking the user's eye movements. For instance, in a real-world scene, faces, text (e.g., books, magazines), and electronic devices may be likely of interest to the user and may be assigned a high priority. Other regions of the real-world scene, such as ceilings, floors, table tops, and walls may likely be of less interest to a user. The AR device may determine a priority for each of these regions and may superimpose virtual objects for display in the lowest priority regions. 
As an example, if a user is viewing an email application (in this example, the virtual object) with the AR device and the user looks down at a magazine, the email application may be positioned for display by the AR device's HMD so that text and graphics of the magazine are not obscured by the email application” and “manipulation of virtual objects may be performed by a “window manager.” Virtual objects, which represent the information being presented to the user via the virtual field-of-view (FoV) of the HMD of the AR device, may be repositioned within the virtual FoV in response to the priority of the different regions of the real-world scene being viewed by the user. As such, the execution of each application may be unaffected, rather only the position (and, possibly, brightness and/or color) within the virtual FoV, as controlled by the window manager, may be modified” such that here the virtual object manager is responsive to all changes in the visual field of the device in order to perform repositioning based on priority levels where such changes to the visual field include any manner in which the visual field captured changes from a previous visual field such that as images are captured of the visual field then constraints on the placement of the virtual object are introduced as every region of the scene object must be evaluated by the virtual object manager to determine the optimal placement and presentation of the virtual objects and this search for constraints is continuously taking place as in paragraphs 0034-0035 teaching “image capture module 110 may include an image capture device that is positioned to capture a field-of-view of a real-world scene that is being viewed by a user. Image capture module 110 may include one or more cameras. The camera may be pointed such that it captures images of a scene viewed by the user. Image capture module 110 may capture images rapidly. 
For instance, multiple frames may be captured by image capture module 110 every second. Some or all of these images may be processed to determine the location of various objects within the real-world scene, such as persons and their identities” and paragraph 0101 teaching “As the user's view of the real-world scene changes, method 700 may be repeated to identify the regions now present in a new image of the real-world scene and, possibly, reposition one or more virtual objects. As such, as a user's real-world view changes, the position of the virtual objects may be modified based on new regions and their associated priorities” – thus it is clear that in general the Pandey system generates new user interfaces of AR content responsive to changes in the visual field introduced by any change in the visual field and may constantly provide an arbitrary number of changed/reconfigured user interfaces of AR content to conform to the priority rules that must be followed such that the visual field and what is recognized within at any point in time introduces constraints on placement of the AR content – furthermore as explained above as in paragraphs 0051-0052 an initial first constraint is dealing with introduction of an occlusion in the visual field of the user and this could be in combination with any other subsequent constraints that can be introduced in connection with such change in visual field as the virtual object manager rearranges/changes the AR content to accommodate any new constraint and as the occlusion is described as an ongoing process then any movement of the content to continue to accommodate a new position of the occluding item could then be considered introduction of a second constraint or for example if such change to the visual field causes regions of priority to change overlay-acceptability (see Pandey, paragraph 0050 teaching “It should be understood that alternatively or additionally, the size of a virtual object may also be adjusted. 
For instance, in order to fit within a region of low importance, the size of the virtual object may be decreased. If the region of low importance grows, the size of the virtual object may be grown to occupy a greater portion of the region of low importance” where of course the same principle holds and as in paragraph 0097 “various portions of the real-world scene, as captured in the image of step 710, may be identified. Each region may be required to be at least a minimum, predefined size (e.g., large enough to contain a virtual object)”) then again this introduces another constraint which can be considered a second constraint necessitating a third UI or finally for example if the occlusion no longer appears in the field of view such as “Once a user has completed handling or manipulating an object, virtual objects may be enlarged, repositioned, sharpened, and/or brightened to occupy at least some of the scene previously occupied by the user's hand and/or arm” then this change to the field of view is the removal of the occlusion which introduces second constraints resulting in then “virtual objects may be enlarged, repositioned, sharpened, and/or brightened to occupy at least some of the scene previously occupied by the user's hand and/or arm”), generating a third user interface associated with the object for display by the device, the third user interface including third AR content arranged in a third layout which avoids the non-overlay acceptable region, and the third layout being different from the first layout and the second layout (see Pandey, paragraphs 0050-0052 teaching in connection with the other portions of Pandey as explained above an example of a third UI being generated where “virtual objects may be enlarged, repositioned, sharpened, and/or brightened to occupy at least some of the scene previously occupied by the user's hand and/or arm” and this contains the virtual objects as modified to deal with the constraints introduced by the new visual field in 
a third layout different from the first and second layout as “at least some of the scene” may now be modified to deal with the new space made by the absence of the occluding constraint; additionally, as explained above, it should be understood that Pandey teaches in general that any number of new changed user interfaces may be generated in response to changes in the visual field as for example paragraphs 0095-0101 teaches this concept where “As the user's view of the real-world scene changes, method 700 may be repeated to identify the regions now present in a new image of the real-world scene and, possibly, reposition one or more virtual objects. As such, as a user's real-world view changes, the position of the virtual objects may be modified based on new regions and their associated priorities” and in each case the layout avoids non-overlay acceptable regions and is different to accommodate the optimal setting of the virtual content in relation to the field of view at any given time).
Regarding claim 23, Pandey teaches all that is required as applied to claim 4 above and further teaches wherein the second constraint includes a size of the object as displayed on the device is smaller than a defined minimum size (see Pandey, paragraphs 0095-0101 as noted above teaching “As the user's view of the real-world scene changes, method 700 may be repeated to identify the regions now present in a new image of the real-world scene and, possibly, reposition one or more virtual objects. As such, as a user's real-world view changes, the position of the virtual objects may be modified based on new regions and their associated priorities” and furthermore a constraint that can be introduced can include a size of the object as displayed on the device is smaller than a defined minimum size and thus is no longer suitable for placement as it loses its priority status to other areas that may appear for example “At step 740, a position to display a virtual object is selected based on the multiple regions defined at step 720 and the priorities assigned at step 730. The lowest priority region (which corresponds to the lowest priority real-world object(s) in the scene) may be selected for a virtual object to be superimposed over. It may be determined whether the region is large enough for the virtual object to fit (virtual objects may vary in size). If not, the next lowest priority region may be selected” where regions identified as overlay acceptable must always conform to a minimum size and thus if an object region changes size due to a user perspective change such that it is no longer suitable for display then this becomes part of the non-overlay acceptable region of the object and the virtual object manager thus repositions and/or re-colors and/or re-sizes AR content such that it is third AR content based on some area now being smaller than that defined minimum size for object placement).
Regarding claim 5, Pandey teaches all that is required as applied to claim 23 above and further teaches wherein the second layout includes the second AR content overlaid on the object, and wherein the third layout includes the third AR content not overlaid on the object (here it must be noted that Pandey teaches the object comprised of the detected relevant regions for display such that for example while figures 2A and 2B show an entire room filled with recognized regions and virtual objects of some specific type, of course any view of such a scene or any scene may serve as the objects recognized for placement of the AR content in conjunction with the priority rules as the view of the user changes and causes new constraints on the placement of AR content on the region and thus for example if some overlay acceptable region of a wall changes based on new information in the field of view that necessitates a change in placement/presentation then any new arrangement of the content constitutes another user interface of the content; for example in the case where a user has content initially placed as in figures 2A-2B then the position of the AR content such as 250-2 is placed on the floor object as well as the wall object of the recognized objects/regions and in the case that a second layout is needed as in paragraphs 0050-0052 to avoid a first occlusion constraint then according to the principles of Pandey taught above if for example the occlusion of the user arm interferes with the content arranged on the floor object then a second user interface repositions the content to an overlay-acceptable region such as the wall object/region as part of recognized object/region 260-5 such that the second layout includes the second AR content arranged overlaid on a wall object and as contemplated above a third layout may be the result of any change in view of the user according to the teachings of paragraphs 0095-0101 as explained above where “As the user's view of the
real-world scene changes, method 700 may be repeated to identify the regions now present in a new image of the real-world scene and, possibly, reposition one or more virtual objects. As such, as a user's real-world view changes, the position of the virtual objects may be modified based on new regions and their associated priorities” and such a change in view could be the occluding portion changing position/size in the FoV introducing more constraints on the objects upon which overlay is acceptable where for example it follows that if an initial occlusion occludes a portion of a virtual object as displayed on the floor object as in figure 2A, for example being introduced from the left side, then a reposition could occur to shift the virtual object such that it continues to overlay the floor object and then if the occlusion is changed to another position fully occupying the floor then a third layout includes the third AR content not overlaid on the object as it could be repositioned to solely the wall object which is overlay acceptable).
Regarding claim 6, Pandey teaches all that is required as applied to claim 5 above and further teaches wherein the third layout further includes the third AR content in a collapsed form compared to the first AR content (see Pandey, paragraphs 0049-0052 teaching that content may be collapsed relative to its previous form such that it appears in a collapsed form to a user where something collapsed is something which has been compressed or reduced in some manner to fit in a smaller form than previously presented as “It should be understood that alternatively or additionally, the size of a virtual object may also be adjusted. For instance, in order to fit within a region of low importance, the size of the virtual object may be decreased. If the region of low importance grows, the size of the virtual object may be grown to occupy a greater portion of the region of low importance” and for example in response to constraints “virtual objects may be made transparent, blurred, moved and/or resized” and “some or all virtual objects may be reduced in size such that the person's face is not obscured”).
Regarding claim 8, Pandey teaches all that is required as applied to claim 1 above and further teaches wherein the second layout includes the second AR content not overlapping the occluded portion of the object (see Pandey, paragraphs 0049-0052 as explained above teaching “the locations, colors, and brightness used to project virtual objects may vary based on a particular real-world scene and/or user preferences” and “virtual objects being moved to regions of low priority for display to the user” where specifically for example “if the user extends his arm and/or looks down, some portion of the user may be visible, such as the user's hand and/or arm. When the user is looking at or near his hand or arm, the user may desire to watch what he is doing (such as handling or manipulating an object). As such, virtual objects may be made transparent, blurred, moved and/or resized to allow the user to clearly view the user's hand and/or arm. As such, the user's own body may be afforded a high priority. The user's hand and/or arm may be detected via skin tone recognition. Once a user has completed handling or manipulating an object, virtual objects may be enlarged, repositioned, sharpened, and/or brightened to occupy at least some of the scene previously occupied by the user's hand and/or arm” such that here the second layout has the AR content not overlapping the occluded portion of the object).
Regarding claim 10, Pandey teaches all that is required as applied to claim 4 above and further teaches wherein the second constraint includes a change in at least one of roll, pitch, or yaw of the object in relation to the device (note that a roll, pitch, or yaw of the object in relation to the device would cover any manner of such a change in any of these orientations of the object in relation to the view of the device and thus if an object or region changes roll, pitch, or yaw then this is a change in roll, pitch, or yaw in relation to the device as would a change in roll, pitch, or yaw as appearing to the user if the user device changes relative to the object; see Pandey, paragraphs 0049-0052 and 0095-0101 as explained above where a user’s view changes in three-dimensional space as the objects/regions are tracked and this would include situations in which the pitch, yaw, or roll of the object changes as the user changes perspective relative to the object, or of course a suitable object or surface may change in which case the roll, pitch, or yaw of the object relative to the device would change the visual field and introduce a constraint as this would simply change the candidate regions for display).
Regarding claim 11, Pandey teaches all that is required as applied to claim 1 above and further teaches wherein detecting the object includes identifying that the object is in one state of a set of defined states, and wherein at least some of the AR content is associated with the one state (see Pandey, paragraphs 0095-0101 teaching to identify various states of the object to determine priority including size, color, brightness, occlusions, etc and the AR content is associated with these states through priority and virtual object manager control).
Regarding claims 12-17, 19, 21, and 24, the instant claims recite an apparatus in the form of a “system” which comprises basic computing parts such as “at least one processor; and a memory storing processor-executable instructions that, when executed, cause the at least one processor to” perform the functions as in claims 1-6, 8, 10, and 23, respectively. Pandey already teaches such a system (see Pandey, figure 9) and teaches the functionality as explained in the rejections above. In light of this, the limitations of claims 12-17, 19, 21, and 24 correspond to the limitations of claims 1-6, 8, 10, and 23, respectively; thus they are rejected on the same grounds as claims 1-6, 8, 10, and 23, respectively.
Regarding claim 22, the instant claim recites an apparatus of the form “non-transitory computer readable medium having stored thereon computer-executable instructions that, when executed by a computer, cause the computer to perform operations” where the operations performed are those as in claim 1 above. Pandey teaches such an apparatus (see Pandey, paragraphs 0114-0126). In light of this, the limitations of claim 22 correspond to the limitations of claim 1; thus it is rejected on the same grounds as claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 9 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pandey in view of Diament et al2 (“Diament”).
Regarding claim 9, Pandey teaches all that is required as applied to claim 4 but fails to teach wherein the second constraint includes a size of the object as displayed on the device being larger than a defined maximum size. Rather, while Pandey teaches that size is a consideration in determining the position and other properties of content overlaid on the object, it is silent with respect to any defined maximum size of the object being introduced as a constraint.
Thus Pandey stands as a base device upon which the claimed invention can be considered an improvement through consideration of some defined maximum size of an object as displayed on a display device as a constraint on generating another version of AR content arranged in another layout due to some change in view introducing such a constraint, which would allow further tailoring of virtual content in relation to changed content in a FoV to provide a more optimized presentation of AR content.
In the same field of endeavor relating to the placement of AR content onto objects whereby AR content may be adjusted in response to changes in view related to the object, Diament teaches to consider a defined maximum size of an object as displayed as an object for AR content overlay as a constraint for generating a different layout of AR content and changing the AR content layout to fill an overlaid content area as its maximum size for displaying a certain amount of content in a certain form changes to allow further display of AR content in another generated user interface of the content overlaid on a relevant object (see Diament, paragraphs 0060-0065 teaching “distinctions between forms 406-1 and 406-2 shown in FIGS. 4C and 4D illustrate a few of these differences that render first and second forms of an overlay object distinct from one another. Specifically, as shown, form 406-1 is depicted to have three lines of text that are all the same size, while form 406-2 is depicted to have one line of text (“Text 01”) that is notably larger than the others, like a title. Moreover, while form 406-1 is limited to three lines of text due to being tailored to the relatively small size of augmentable object 210 in field of view 402-C, form 406-2 includes eight lines of text and a graphic due to being tailored to the relatively large size of augmentable object 210 in field of view 402-D” and “As fields of view 402-A and 402-B in FIGS. 
4A and 4B illustrate, it may be beneficial for the overlay object to change and adapt in accordance with the changing of augmentable object 210 within the field of view” and for example “As another example, one or more new lines of text or new graphics may be added one at a time to form 406-1 as space becomes available on the overlay object (e.g., as augmentable object 210 comes closer and grows in size) until all of the elements of form 406-2 are in place” such that here a defined maximum size of an object as displayed can fit a maximum amount of information in a certain size and as the defined maximum size grows as displayed then the content can also change in size and amount of text or new graphics as this maximum space becomes available, thus introducing a second constraint which causes another AR content layout). Thus Diament provides a known technique applicable to the base system of Pandey.
Therefore it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify the system of Pandey to include the known technique of Diament as doing so would yield predictable results and result in an improved system. The predictable result of the combination would be that AR content arrangement and layout would continue to operate as in Pandey with an additional type of constraint as taught by Diament contributing further layout tailoring such that the AR content arrangement may be considered along with a maximum display size of an object for overlay as viewed by a user and thus such repositioning and resizing as in Pandey could then include such adaptations as in Diament. This would result in an improved system as for example using different forms of overlay may help to provide a more efficient manner of providing AR information overlaid onto objects as suggested by Diament (see Diament, paragraphs 0061-0062 teaching “To remedy this potential inefficiency, system 100 may thus be configured to use different forms 406 of an overlay object, as shown in FIGS. 4C and 4D. Rather than projecting the same three lines of text that made sense to display in form 406-1 of the overlay object, system 100 may project a more detailed, helpful, and tailored form of the overlay object for the relatively close proximity of augmentable object 210 shown in field of view 402-D. As such, FIG. 4D shows that a much greater amount of text may be included within the overlay object, as well as a graphic. In this way, the visual field of user 202 may be used more efficiently than in the example of FIGS. 4A and 4B to provide an appropriate level of detail related to augmentable object 210 based on the apparent proximity of augmentable object 210”).
Note that similar reasoning is applied to similar dependent claim 20.
Response to Arguments
Applicant's arguments filed 12/9/2025 have been fully considered but they are not persuasive.
Applicant first argues that claim 1 requires “detecting an occlusion, by a second real-world object, of a portion of the object on which at least some of the first AR content was overlaid, that causes some of the first AR content to disappear; and second a determination that the occlusion is not transient” and that “[o]nly after both conditions are satisfied does the claimed system generate the second user interface.” Applicant then cites paragraph 0070 of the Specification as at least appearing to teach “system 400 may determine that the occlusion is non-transient” but this does not explain how or what is meant by non-transient, instead using different language of “recognizing that the occlusion is temporary” but then there is no definition of what “temporary” would mean in relation to “non-transient.” Applicant then argues that the “application further gives an example of when an occlusion may be determined to be non-transient, describing that when an occlusion persists – “if the occluding object is unmoved and the occlusion lasts longer than a defined period of time (e.g., two seconds), the AR system 400 may categorize the occlusion as non-transient, and may respond by generating the occlusion as non-transient” and that as such “[n]othing in Pandey discloses or suggests that the system evaluates the persistence of an occlusion or whether the occlusion by the second real-world object is transient or non-transient before adjusting the user interface.” Applicant then alleges that “Pandey consistently describes a system that immediately reacts to adjust virtual objects upon detection of a high-priority region…in the visual field without any additional consideration” and that there is “no disclosure in Pandey of determining whether the occlusion…is non-transient, such as by determining that it’s unmoved or lasting longer than a defined period of time.” Applicant then argues that because the “present application provides for circumstances in which the original layout 
is preserved despite an occlusion of the object on which the first AR content is overlaid” that this allegedly means the “features of claim 1, which requires both detection of an occlusion and a determination that the occlusion is non-transient before generating the second user interface, is not disclosed or suggested by Pandey.” The Examiner respectfully disagrees.
Applicant’s arguments above rely, in each instance, on an unreasonably narrow interpretation of “non-transient” based on improperly importing limitations from the Specification into the claim language, while also not recognizing the extent to which Pandey does teach many of these claim limitations. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., “recognizing that the occlusion is temporary” and “describing that when an occlusion persists – “if the occluding object is unmoved and the occlusion lasts longer than a defined period of time (e.g., two seconds), the AR system 400 may categorize the occlusion as non-transient, and may respond by generating the occlusion as non-transient”, “an occlusion may be determined to be non-transient, describing that when an occlusion persists – “if the occluding object is unmoved and the occlusion lasts longer than a defined period of time (e.g., two seconds), the AR system 400 may categorize the occlusion as non-transient, and may respond by generating the occlusion as non-transient”, and “the system evaluates the persistence of an occlusion or whether the occlusion by the second real-world object is transient or non-transient before adjusting the user interface” and “system that immediately reacts to adjust virtual objects upon detection of a high-priority region” in the context of contrasting the alleged invention as implying allegedly not immediately reacting, “determining whether the occlusion…is non-transient, such as by determining that it’s unmoved or lasting longer than a defined period of time”, and “provides for circumstances in which the original layout is preserved despite an occlusion of the object on which the first AR content is overlaid”) are not recited in the rejected claim(s).
Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Note that as each of Applicant’s arguments relies on such features which are not claimed, all such arguments are not persuasive. This is because when the claims are given their proper broadest reasonable interpretation, Pandey teaches all of such features as explained fully in the rejections above, which properly consider how the triggering of the second UI is responsive to determining an occlusion as well as some determination that the occlusion is non-transient.
Even assuming certain of the limitations from the Specification were required or limitations asserted which are not claimed were required, Pandey does teach evaluating the persistence of an occlusion and whether it is non-transient. Note that the claim does not require any determination of an occlusion as “transient” nor that if something is identified as transient that it cannot trigger a change to a second UI. Furthermore, regardless of whether Pandey “immediately reacts to adjust virtual objects upon detection”, Applicant does not explain how even such an immediate reaction would mean that the occlusion is determined not to be “non-transient”. Rather, Pandey does describe specifically determining object persistence, as the persistence of the object in view during the duration of the handling/manipulating is a determination that the occlusion is such that the system must respond to it, until the occlusion is no longer there. This is a determination that the occlusion is non-transient to the extent that it requires the system’s response to generate the second UI. Thus not only is occlusion determined, but also a duration of the occlusion is what causes the second UI to be generated, such that this is not a transient occlusion but a sustained and on-going or otherwise non-transient event that the system responds to. In effect, Pandey only provides teachings that support that the occlusion is determined to be functionally non-transient, and does not explain what would occur in the event of an occlusion that the system detected as an occlusion but one which is too transient to cause an effect. Thus Applicant’s arguments in this respect are not persuasive.
Finally, with regard to Applicant’s arguments on page 9 concerning certain hypothetical scenarios and how Pandey would allegedly hypothetically respond, it is noted again that the scenario described is not recited in the claims and thus such speculation is not persuasive with regard to the claim language. For example, note that Applicant’s claim language, even as interpreted narrowly by Applicant, would not be able to respond properly to Applicant’s scenario either, as the claim says nothing about when an occlusion is non-transient enough to cause an adjustment but nevertheless does not cause one. Thus Applicant’s argument is not persuasive. However, Applicant is encouraged to include such features in the claim language, which could deal with “circumstances in which the original layout is preserved despite an occlusion of the object on which the first AR content is overlaid” as this could help to further define the claims and perhaps distinguish from the prior art.
Note that the same reasoning applies to all of the independent claims and no more specific arguments are presented. Thus the claims stand rejected as fully explained above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SCOTT E SONNERS whose telephone number is (571)270-7504. The examiner can normally be reached Mon-Friday 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SCOTT E SONNERS/Examiner, Art Unit 2613
/XIAO M WU/Supervisory Patent Examiner, Art Unit 2613
1 US PGPUB No. 20140132629
2 US PGPUB No. 20200242841