DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/05/2025 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 8-13, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Nussbaum et al. (US 10832476 B1, hereinafter “Nussbaum”) in view of Wright et al. (US 11417091 B2, hereinafter “Wright”).
Regarding claim 1, Nussbaum teaches A method of using augmented reality to assess damage to an object comprising: (Abstract, “This application discloses methods, systems, and computer implemented virtualization software applications and computer-implemented graphical user interface tools for remote virtual visualization of structures”; col. 23, lines 55-58, “the preceding discussion primarily discusses damage assessment using Virtual Reality, Augmented Reality (AR), and/or mixed reality by generating models representing sites, areas, structures, or portions thereof”).
… viewing damage to the object in the viewer from a first view; (FIGURE 12: roof damage on structure 1224; col. 17, lines 58-62, “FIG. 12 illustrates presentation of an exemplary virtual 3D digital model of the structure 1110, by displaying a representation of the virtual 3D digital model 1224 of the structure 1110 within a virtual 3D digital environment 1228 on a user interface unit 1218”). Note that: (1) the damage to the object (roof) is displayed or viewed in the viewer of user interface unit 1218; and (2) since the presentation in FIGURE 12 is rendered from the virtual 3D digital model of the object (structure) within a virtual 3D digital environment, the rendering perspective indicates a viewpoint (a first view) for the displayed image.
However, Nussbaum fails to disclose the following limitations. In the same art of computer graphics, Wright discloses:
connecting to an augmented reality system comprising a first user and a second user; (Wright, FIG. 1: an augmented reality system consisting of “CAMERA”, “AR DISPLAY”, “VIRTUAL ANNOTATIONS”, “VISUAL TRACKING”, “VIEW SYNTHESIS”, “VR DISPLAY”, etc., and “LOCAL USER” and “REMOTE USER”; col. 3, lines 25-36, “FIG. 1 illustrates an example system for a particular example application that enables a remote user to explore a physical environment via live imagery from a camera that the local user holds or wears, such as a camera of a mobile device or a wearable computing device, which may be or include a networked, wearable camera. The remote user is able to interact with a model fused from images captured from the surroundings of the local user and create and add virtual annotations in it or transfer live imagery (e.g., of gestures) back”). Note that: (1) a local user is a first user while a remote user is a second user; and (2) the two users connect to an augmented reality system in FIG. 1 of Wright.
viewing the object through an image capture device; (Wright, FIG. 3: “LOCAL USER A” can view a car engine through a camera; col. 3, lines 28-33, “that enables a remote user to explore a physical environment via live imagery from a camera that the local user holds or wears, such as a camera of a mobile device or a wearable computing device, which may be or include a networked, wearable camera”). Note that: (1) a camera of a mobile device or a wearable computing device is an image capture device; and (2) a car engine is an object for viewing.
accepting comments from the second user to the first user; (Wright, col. 22, lines 22-26, “Accordingly, the expert's annotations may be augmented or added to the environment as viewed by the user while allowing the user to freely choose his/her viewpoint to look at the user's environment as well as the annotations”; col. 28, lines 11-15, “The first part allows the user …to receive the expert's instructions and comments and the second part allows the user to replay the expert's instructions and comments to guide him/her through the issue”). Note that: (1) the expert is the second user while the user or local user is the first user; (2) the expert's comments, instructions, or annotations can be accepted or received for the local user to view; and (3) there are no other specific limitations in this claim on how the users communicate with each other.
determining, based on user input and contextual metadata, if an annotation is desired; (Wright, col. 3, lines 13-21, “At least one of the local user and the remote user may be able to add annotations (e.g., markings, notes, drawings, etc.) to certain objects within the environment captured within the images and/or video. These annotations and the shared images and/or video may improve the communication between the local user and the remote user by, for example, allowing the local user and the remote user to visually identify specific objects in the local user's environment.”; col. 4, lines 21-23, “Annotations may include point-based markers, more complex three-dimensional annotations, drawings, or live imagery, such as hand gestures”). Note that: (1) the hand gesture, or its indication description, is contextual metadata that describes the environment data of the annotations; (2) the local and remote users can generate annotations based on user input and the contextual metadata (hand gesture); and (3) since the annotations may improve the communication between the local user and the remote user, the users can determine that the annotations are useful or desired for their applications, resulting in the annotations being added.
in response to determining an annotation is desired:
enabling a collaborative annotation interface that allows simultaneous input from both users; (Wright, col. 3, lines 13-21, “At least one of the local user and the remote user may be able to add annotations (e.g., markings, notes, drawings, etc.) to certain objects within the environment captured within the images and/or video ... may improve the communication between the local user and the remote user by, for example, allowing the local user and the remote user to visually identify specific objects in the local user's environment.”; col. 3, line 63 - col. 4, line 9, “the local user may hold or wear a device that integrates a camera and a display system (e.g., hand-held tablet, mobile device, digital eyewear, or other hardware with a camera), which is used to both sense the environment and display visual/spatial feedback from the remote user correctly registered to the real world. In the case of a hand-held device, the handheld device acts as sort of a "magic lens" (i.e., showing the live camera feed and virtual annotations, when the embodiment includes AR). Since a collaboration system typically aids the user an actual task being performed rather than distracts from it, an interface which is simple and easy to comprehend is typically provided such as to facilitate an active user who may be looking at and working in multiple areas”). Note that: (1) an annotation is determined to be useful or desired, so the condition is triggered; (2) the local user and the remote user work together with a provided interface that is simple and easy to comprehend, which enables the collaboration of the local and remote users to add annotations (e.g., markings, notes, drawings, etc.); and (3) the live collaboration of the local and remote users indicates that simultaneous input from both users (adding annotations) is allowed and employed.
storing the annotated image and augmented video in a format configured for asynchronous review by third-party viewers with access controls (Wright, col. 22, lines 28-36, “As the AR session may be recorded, the expert's annotations may be provided for use by various users, and the "sticking" annotations can then be used to overlay or annotate or overlay over different video captured by the devices of those various users. Further, this may allow the various users to be at any perspective or position in their respective environments without losing the effect or message of the expert's annotations”; col. 3, lines 13-21, “At least one of the local user and the remote user may be able to add annotations (e.g., markings, notes, drawings, etc.) to certain objects within the environment captured within the images and/or video ... may improve the communication between the local user and the remote user by, for example, allowing the local user and the remote user to visually identify specific objects in the local user's environment”; col. 21, lines 26-34, “the AR or VR system can be programmed to communicate concurrently while also creating a recording for later review. Accordingly, in some embodiments, an AR or VR session may be generated based on a "live" or current issue being faced by the user and recorded for later review by the user or other users or recorded for later review by the users and other users”; FIG. 1: “OBJECT DATABASE”; FIG. 2: “NON-VOLATILE” memory 208 and “NON-REMOVABLE STORAGE 214”). Note that: (1) the expert's annotations and the annotation-overlaid video (augmented video) can be stored when the AR session is recorded; (2) when the AR session, including the annotations, captured videos, and overlays, is recorded, the recorded contents of the AR session can be stored in the corresponding database, memory, and/or storage; (3) it is obvious to one having ordinary skill in the art that the data are stored in a certain designated format (e.g., JPEG, MPEG, streaming media, multi-media compression, transfer, or storage formats) that accommodates the stored AR data, i.e., the annotated image and augmented video data, for concurrent user review or later user review in an asynchronous review mode; (4) other users or additional parties can be allowed, at any perspective or position, to view the annotations along with the augmented or overlaid video, and it is obvious to one having ordinary skill in the art that the users have corresponding access credentials or access controls for data security and management; (5) other users or additional parties can be third-party viewers reviewing the stored annotated image and augmented video data; and (6) Examiner notes that there is no definition of “third-party viewers” in the specification.
Nussbaum and Wright are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply viewing an object, accepting comments, determining if an annotation is desired, allowing an image to be annotated, and storing the annotation along with augmented video, as taught by Wright, to Nussbaum. The motivation would have been “These annotations and the shared images and/or video may improve the communication between the local user and the remote user by, for example, allowing the local user and the remote user to visually identify specific objects in the local user's environment” (Wright, col. 3, lines 17-21). Doing so would improve the communication between the local user, the remote user, and additional users. Therefore, it would have been obvious to combine Nussbaum with Wright.
Regarding claim 2, Nussbaum in view of Wright discloses The method of claim 1, further comprising estimate loss based on the augmented video. (Nussbaum, col. 23, lines 60-63, “The methods and systems described above apply equally to other uses with appropriate modifications, primarily modifications to the types of virtual objects and data sources used to estimate costs associated with the damage”; Wright, col. 22, lines 28-36, “As the AR session may be recorded, the expert's annotations may be provided for use by various users, and the "sticking" annotations can then be used to overlay or annotate or overlay over different video captured by the devices of those various users. Further, this may allow the various users to be at any perspective or position in their respective environments without losing the effect or message of the expert's annotations”). Note that: (1) the augmented video images, including all images (scene images, images rendered from the 3D object model, video, annotation-overlaid images, etc.), can be recorded and stored in the corresponding memory, database, or storage as cited for claim 1; and (2) the damage loss or cost can be assessed or estimated based on the augmented video images that include virtual objects and data sources.
Regarding claim 3, Nussbaum in view of Wright discloses The method of claim 1, further comprising creating 3-d model of the object. (Nussbaum, FIGURE 10: step 1008, “GENERATE ONE OR MORE VIRTUAL 3D MODELS INCLUDING VIRTUAL REPRESENTATION OF ONE OR MORE STRUCTURES”). Note that: there are no other specific limitations besides “creating 3-d model of the object” in this claim.
Regarding claim 4, Nussbaum in view of Wright discloses The method of claim 1, further comprising … damages ... (Nussbaum, col. 23, lines 60-63, “The methods and systems described above apply equally to other uses with appropriate modifications, primarily modifications to the types of virtual objects and data sources used to estimate costs associated with the damage”).
highlight … to the object (Wright, col. 8, lines 44-48, “the object may be highlighted on a display associated with the electronic device and the user may be able to select the object to bring up a menu of options to interaction with the object within an AR environment”; col. 7, lines 5-6, “this may include highlighting or otherwise visually identifying the recognized object”). Note that: (1) the damage to the object is included in the object and can be highlighted; and (2) the combination of Nussbaum and Wright here highlights the damages to the object.
The motivation to combine Nussbaum and Wright given in claim 1 is incorporated here.
Regarding claim 5, Nussbaum in view of Wright discloses The method of claim 1, further comprising:
determining if additional annotations are desired;
in response to determining an additional annotation is desired:
allowing the image to be further annotated.
(Wright, col. 56, lines 8-14, “the labels may be warning labels, etc., like "be careful, this is hot, and we will tackle that afterwards." The system may keep these annotations and/or objects until the conditions no longer exist (e.g., identified by one or more sensors)”; col. 22, lines 8-10, “In some embodiments, the expert's device may include visual inputs that allow the expert to draw or type annotations into the video feed being received from the user”). Note that: (1) when conditions exist (e.g., identified by one or more sensors), the labels or annotations can be determined to be desired, and additional annotations from the expert can be desired in addition to the warning labels as current annotations when the conditions persist; (2) the system may keep the annotations in, or overlaid with, the image, where the image may be a 2D or 3D static image or video as defined in the specification of this application, and the expert is further allowed to draw, enter, or add annotations into the video images; and (3) there are no other specific limitations in this claim on how the process is performed (e.g., user-driven or user input).
The motivation to combine Nussbaum and Wright given in claim 1 is incorporated here.
Regarding claim 8, Nussbaum in view of Wright discloses The method of claim 1, further comprising allowing the view to be changed to a second view. (Wright, col. 22, lines 22-24, “the expert's annotations may be augmented or added to the environment as viewed by the user while allowing the user to freely choose his/her viewpoint to look at the user's environment as well as the annotations”). Note that: (1) the expert is the second user while the user or local user is the first user; and (2) the user is allowed to freely change his/her viewpoint, which is equivalent to allowing the view to be changed to a second view from the first view.
The motivation to combine Nussbaum and Wright given in claim 1 is incorporated here.
Claim 9, reciting “A non-transitory computer readable medium comprising computer executable instructions that physically configure a processor, the computer executable instruction comprising instructions for using augmented reality to assess damage to an object comprising instruction for:”, corresponds to the method of claim 1. Therefore, claim 9 is rejected for the same rationale as claim 1.
In addition, Nussbaum in view of Wright discloses A non-transitory computer readable medium comprising computer executable instructions that physically configure a processor, the computer executable instruction comprising instructions for using augmented reality to assess damage to an object comprising instruction for: (Nussbaum, col. 30, lines 24-28, “18. A tangible, non-transitory computer-readable medium storing executable instructions for remote three-dimensional(3D) visualization of a location that, when executed by at least one processor of a computer system, cause the computer system to:”).
Claims 10-13 correspond to the methods of claims 2-5, respectively. Therefore, claims 10-13 are rejected for the same rationales as claims 2-5, respectively.
Claim 16 corresponds to the method of claim 8. Therefore, claim 16 is rejected for the same rationale as claim 8.
Claim 17, reciting “A computer system comprising: a processor that is physically configured according to computer executable instructions, a memory in communication with the processor; and an input-output circuit in communication with the processor, the computer executable instruction comprising instructions for using augmented reality to assess damage to an object comprising:”, corresponds to the method of claim 1. Therefore, claim 17 is rejected for the same rationale as claim 1.
In addition, Nussbaum in view of Wright discloses A computer system comprising: a processor that is physically configured according to computer executable instructions, a memory in communication with the processor; and an input-output circuit in communication with the processor, the computer executable instruction comprising instructions for using augmented reality to assess damage to an object comprising: (Nussbaum, col. 29, lines 15-25, “11. A computer system for remote three-dimensional (3D) visualization of a physical structure, comprising: one or more processors; a communication component connected to the one or more processors and configured to send and receive electronic communications via a communication network; and a non-transitory program memory communicatively coupled to the one or more processors and storing executable instructions that, when executed by the one or more processors, cause the computer system to:”; col. 40-42, “the communication unit 766 may provide input signals to the controller 750 via the I/O circuit 758”; FIGURE 7: “I/O” circuit 758 is in communication with “MICRO-PROCESSOR (MP)” 754).
Claims 18 and 20 correspond to the methods of claims 2 and 5, respectively. Therefore, claims 18 and 20 are rejected for the same rationales as claims 2 and 5, respectively.
Claim 19 corresponds to the methods of claims 3 and 4. Therefore, claim 19 is rejected for the same rationale as claims 3 and 4.
Claims 6-7 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Nussbaum in view of Wright, and further in view of Mentis (US 10169535 B2, hereinafter “Mentis”).
Regarding claim 6, Nussbaum in view of Wright fails to disclose the following limitation; however, in the same art of computer graphics, Mentis discloses annotations are created by moving a hand. (Mentis, col. 9, lines 31-47, “The annotation tool is selected using a voice command or a touchless arm or hand movement to select from the control menu on the screen, aka a gesture command. Once the annotation tool is selected, the annotation tool may be controlled using gesture commands, voice command, or both. Annotations can include using gestures to draw freeform or from a selection of geometric shapes such as lines, arrows, circles, … A text box may also be included in the annotation tool and text may be inputted using voice to text capability, or using sign languages or specialized custom user-developed gestures”). Note that: hand movement is equivalent to moving a hand.
Nussbaum in view of Wright, and Mentis, are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply creating annotations by moving a hand or by voice command, as taught by Mentis, to Nussbaum in view of Wright. The motivation would have been “The video, coupled with the consultation with an implanting surgeon at the time of procurement, permits improved sharing of organ function assessment and facilitates decision-making between recovery and implant surgeons.” (Mentis, col. 12, lines 33-37). Doing so would improve the assessment of object characteristics and the sharing of information. Therefore, it would have been obvious to combine Nussbaum, Wright, and Mentis.
Regarding claim 7, the combination of Nussbaum, Wright, and Mentis discloses annotations are created by voice command. (Mentis, col. 9, lines 31-47, “The annotation tool is selected using a voice command … to select from the control menu on the screen, aka a gesture command. Once the annotation tool is selected, the annotation tool may be controlled using gesture commands, voice command, or both. Annotations can include using gestures to draw freeform or from a selection of geometric shapes such as lines, arrows, circles, … A text box may also be included in the annotation tool and text may be inputted using voice to text capability, or using sign languages or specialized custom user-developed gestures”).
The motivation to combine Nussbaum, Wright, and Mentis given in claim 6 is incorporated here.
Claims 14-15 correspond to the methods of claims 6-7, respectively. Therefore, claims 14-15 are rejected for the same rationales as claims 6-7, respectively.
Response to Arguments
Applicant's arguments with respect to the rejections under 35 U.S.C. 103, filed on 11/06/2025, have been fully considered but they are not persuasive.
Applicant alleges, “Applicant respectfully submits that the amended claims now clearly overcome the cited prior art references - Nussbaum et al. (US 10,832,476 B1), Wright et al. (US 11,417,091 B2), and Mentis (US 10,169,535 B2) - and place the application in condition for allowance” (page 6, lines 9-11), “The amended claim also specifies that annotations are associated with timestamps and user identifiers, adding a structural data layer that enables traceability and auditability. None of the cited references disclose or suggest this level of annotation provenance.” (page 6, lines 23-25), “Finally, the claim now recites that the annotated image and augmented video are stored in a format configured for asynchronous review by third-party viewers with access controls … This amendment introduces a new use case and system-level feature that is not present in the prior art.” (page 6, lines 26-30), and “Taken together, these amendments introduce non-obvious structural and functional limitations that are not taught or suggested by the cited references, even in combination.” (page 7, lines 1-2). However, Examiner respectfully disagrees with the respective allegations as a whole because:
As set forth in the rejection of claim 1 above, Nussbaum teaches a method of using augmented reality to assess damage to an object, including viewing damage to the object in the viewer from a first view (Nussbaum, Abstract; col. 17, lines 58-62; col. 23, lines 55-58; FIGURE 12). Wright discloses the remaining limitations, including connecting to an augmented reality system comprising a first user and a second user, viewing the object through an image capture device, accepting comments from the second user to the first user, determining, based on user input and contextual metadata, if an annotation is desired, enabling a collaborative annotation interface that allows simultaneous input from both users, and storing the annotated image and augmented video in a format configured for asynchronous review by third-party viewers with access controls (Wright, FIGS. 1-3; col. 3, lines 13-36; col. 3, line 63 - col. 4, line 9; col. 4, lines 21-23; col. 21, lines 26-34; col. 22, lines 22-36; col. 28, lines 11-15).
The rationale and motivation for combining Nussbaum and Wright, together with the accompanying notes, are set forth in the rejection of claim 1 above and are incorporated here.
Nussbaum in view of Wright discloses all limitations of claim 1.
The particular argument, “annotations are associated with timestamps and user identifiers, adding a structural data layer that enables traceability and auditability” (page 6, lines 23-24), is directed to a feature that is not recited in the claim(s). Therefore, the argument is moot.
Independent claims 9 and 17 correspond to claim 1. Therefore, claims 9 and 17 are rejected for the same rationale as claim 1.
All claims depending from independent claims 1, 9, and 17 are rejected for the respective rationales set forth above.
The arguments are not persuasive.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BIAO CHEN whose telephone number is (703)756-1199. The examiner can normally be reached M-F 8am-5pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee M Tung can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Biao Chen/
Patent Examiner, Art Unit 2611
/KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611