DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office Action is in response to Applicant’s amendment/response filed on 28 August 2025, which has been entered and made of record.
Response to Arguments
Applicant's arguments filed 28 August 2025 have been fully considered but they are not persuasive.
Applicant argues “Cowburn discloses a method where a second device captures an image of a marker displayed on the first device and determines a transformation using that image of the marker,” while the current invention measures a space “without the need to install or display a dedicated marker (like the QR code of Cowburn)” (Remarks, pgs. 10-11). As a matter of clarification, the Examiner points out that Cowburn does not require using QR codes or dedicated markers. Cowburn discloses “The marker can be … a natural feature marker” (para. 62). Using a natural feature marker means using the physical features of a real-world object to locate the object, as opposed to a QR code or other dedicated marker.
Applicant argues “Cowburn fails to recite … measuring the position and orientation with the other terminal by using a ranging sensor, while a guide image for matching the other terminal is displayed” (Remarks, pg. 11). It is true that Cowburn fails to recite the claimed ranging sensor or guide image. However, the Office Action does not rely on Cowburn to teach the claimed ranging sensor or guide image, and therefore the argument is moot. The rejection below lays out secondary prior-art references and motivations to modify Cowburn using the features from those references. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Applicant argues “While Ishige discloses a detector for calculating a distance between two information apparatuses, it only describes measuring the distance. Ishige fails to teach or suggest any relation with a marker, as described in Cowburn, nor does it suggest measuring the position and orientation with another apparatus while a guide image is displayed” (Remarks, pg. 11). Similar to the previous argument, the Office Action does not rely on Ishige to teach a marker (which is not currently claimed) or a guide image, and therefore the argument is moot. The claim requires “measure a relationship relating to a position and an orientation with the second information terminal by the ranging sensor,” and the ranging sensor of Ishige is shown in the citations below to teach measuring a relationship relating to a position and orientation with a second information terminal.
Applicant argues “Dagley does not mention measuring the position and orientation with another terminal while a guide image for matching that other terminal is displayed. The context and purpose are fundamentally different” (Remarks, pg. 11). The Examiner respectfully disagrees. In each of the current invention, Cowburn, Ishige, and Dagley, a user aims an information processing apparatus at a physical object. In each of the current invention, Cowburn, and Ishige, the physical object is a second information processing apparatus; in Dagley the physical object can be any physical object. Dagley teaches that, when aiming an information processing apparatus at a physical object, it is convenient to display a guide image (e.g., something similar to crosshairs or another aiming reference). For example, Dagley recites “pointing a smartphone camera at the target object until the target object aligns with the guide marker” (para. 13). When the guide image aiming technique of Dagley is applied to Cowburn, the combination would render obvious using a guide image to assist a user of Cowburn in aiming at the second information processing apparatus of Cowburn.
Any remaining arguments are considered moot based on the foregoing.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 28, 30, 31, and 38 are rejected under 35 U.S.C. 103 as being unpatentable over Cowburn et al. (US 2021/0201530; hereinafter “Cowburn”) in view of Ishige et al. (US 2012/0314936; hereinafter “Ishige”), and further in view of Dagley et al. (US 2019/0197783; hereinafter “Dagley”).
Regarding claim 28, Cowburn discloses A space recognition system comprising: a first information terminal (“second device … e.g., client devices 102 [of Fig. 1],” para. 60; NOTE: the claimed “first information terminal” maps to the “second device” of Cowburn and the claimed “second information terminal” maps to the “first device” of Cowburn) comprising a camera (“a camera component of a client device,” para. 45), and a display (“client devices' display screens,” para. 13), the first information terminal having a first coordinate system (“a coordinate frame tracked by the second device,” para. 61); and a second information terminal (“first device … e.g., client devices 102 [of Fig. 1],” para. 60) configured to perform processing based on a second coordinate system (“a coordinate frame tracked by the first device,” para. 61) which is different from the first coordinate system (“The origin of the coordinate frame tracked by the first device can be different from the origin of the coordinate frame tracked by the second device,” para. 61), wherein the first information terminal is configured to: when sharing recognition of a space with the second information terminal, recognize the second information terminal based on an image captured by the camera (“the first device displays a marker … the marker can include the session identifier that is associated with the shared AR session,” para. 62; “the second device detects the marker using a camera,” para. 64), measure a relationship relating to a position and an orientation with the second information terminal, and match the first coordinate system with the second coordinate system based on data representing the relationship measured to share the recognition of the space (“the second device determines a transformation (TC) between the first device and the second device using the image of the marker,” para. 67).
Cowburn does not disclose a ranging sensor or measuring the relationship by the ranging sensor.
In the same art of determining a relationship between client devices, Ishige teaches the use of a ranging sensor and measuring a relationship with a second information terminal by the ranging sensor (“The information processing apparatus may further include a detector configured to calculate a distance between the information processing apparatus and the another information processing apparatus,” para. 6; “shares the AR space with a different device, so that the AR object disposed in the AR space can be shared,” para. 45; “obtain the posture information of the different device,” para. 61).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Ishige to Cowburn. The motivation would have been for “allowing easy sharing” of the virtual space (Ishige, para. 17).
The combination of Cowburn and Ishige does not disclose display a guide image for matching the second information terminal on the display superimposed on the image captured by the camera or measuring the relationship while displaying the guide image for matching the second information terminal on the display superimposed on the image captured by the camera.
In the same art of mixed reality, Dagley teaches display a guide image for matching the [target object] on the display superimposed on the image captured by the camera and performing the measuring while displaying the guide image for matching the [target object] on the display superimposed on the image captured by the camera (“display a guide marker (e.g., a guideline, a bounding shape, and/or the like) on an image, such that a user can align a target object with the guide marker (e.g., by pointing a smartphone camera at the target object until the target object aligns with the guide marker),” para. 13; “display the guide marker on a display of the smartphone, superimposed on an image being captured by a camera,” para. 17; “project the guide marker and/or the intersection point into 3D space (e.g., of a real world environment that is being captured by the user device, that includes the target object, etc.) to identify an intersection point in the real world,” para. 20).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Dagley to the combination of Cowburn and Ishige. The motivation would have been to “provide for effective and accurate” measurements (Dagley, para. 14). Cowburn discloses aiming a first information terminal at a second information terminal, which is a physical object. Dagley teaches, when aiming a first information terminal at a physical object, overlaying a guide image on a display of the first information terminal to assist aiming at the physical object. When Cowburn is modified to include the guide image of Dagley, the combination (i.e. a modified version of Cowburn) would teach overlaying the guide image onto the display of Cowburn while aiming at the physical object, which in the case of Cowburn is the second information terminal.
Regarding claim 30, the combination of Cowburn, Ishige, and Dagley renders obvious wherein, when the first information terminal and the second information terminal share and measure the space: the first information terminal measures a first area of the space by the first coordinate system and creates first partial space data described in the first coordinate system, the second information terminal measures a second area of the space by the second coordinate system and creates second partial space data described in the second coordinate system, and the first information terminal or the second information terminal converts the second partial space data into partial space data described in the first coordinate system and integrates the first partial space data and the partial space data described in the first terminal coordinate system to generate a space data in a space unit (“during initialization of the shared AR session, the first device and the second device can be in active Simultaneous Localization And Mapping (SLAM) sessions that are independent of each other, and these SLAM session maps need to be aligned with one another to establish the shared AR session,” Cowburn, para. 60; “a transformation (TC) between the first device and the second device,” Cowburn, para. 67).
Regarding claim 31, the combination of Cowburn, Ishige, and Dagley renders obvious wherein the second information terminal is configured to: generate a conversion parameter for matching the first coordinate system and the second coordinate system, and set the conversion parameter in the second information terminal itself as an own device (“the first device can determine the common coordinate frame using the transformation (TC) received from the second device,” Cowburn, para. 72).
Regarding claim 38, it is rejected using the same citations and rationales described in the rejection of claim 28.
Claims 34, 36, 37, 44-46, and 48 are rejected under 35 U.S.C. 103 as being unpatentable over Cowburn in view of Ishige, and further in view of Dagley, and further in view of Dedonato et al. (US 2021/0150818; hereinafter “Dedonato”).
Regarding claim 34, Cowburn discloses A space recognition system comprising: a first information terminal (“second device … e.g., client devices 102 [of Fig. 1],” para. 60; NOTE: the claimed “first information terminal” maps to the “second device” of Cowburn and the claimed “second information terminal” maps to the “first device” of Cowburn) comprising a space sensor configured to measure a space, a camera (“a camera component of a client device,” para. 45), and a display (“client devices' display screens,” para. 13), the first information terminal having a first coordinate system (“a coordinate frame tracked by the second device,” para. 61); and a second information terminal (“first device … e.g., client devices 102 [of Fig. 1],” para. 60) configured to perform processing based on a second coordinate system (“a coordinate frame tracked by the first device,” para. 61) which is different from the first coordinate system (“The origin of the coordinate frame tracked by the first device can be different from the origin of the coordinate frame tracked by the second device,” para. 61), wherein the first information terminal is configured to: when sharing recognition of the space with the second information terminal, recognize the second information terminal based on an image captured by the camera (“the first device displays a marker … the marker can include the session identifier that is associated with the shared AR session,” para. 62; “the second device detects the marker using a camera,” para. 64), measure a position and an orientation of the second information terminal, calculate a conversion parameter for converting the second coordinate system to the first coordinate system based on the position and the orientation measured (“the second device determines a transformation (TC) between the first device and the second device using the image of the marker,” para. 67), measure a first area of the space by the space sensor to generate a first partial space data described in the first coordinate system, receive a partial space data relating to a second area of the space from the second information terminal, convert the received partial space data described in the second coordinate system to a second partial space data described in the first coordinate system, integrate the first partial space data and the second partial space data (“during initialization of the shared AR session, the first device and the second device can be in active Simultaneous Localization And Mapping (SLAM) sessions that are independent of each other, and these SLAM session maps need to be aligned with one another to establish the shared AR session,” para. 60; “a transformation (TC) between the first device and the second device,” para. 67).
Cowburn does not disclose a ranging sensor or measuring the relationship by the ranging sensor.
In the same art of determining a relationship between client devices, Ishige teaches the use of a ranging sensor and measuring a relationship with a second information terminal by the ranging sensor (“The information processing apparatus may further include a detector configured to calculate a distance between the information processing apparatus and the another information processing apparatus,” para. 6; “shares the AR space with a different device, so that the AR object disposed in the AR space can be shared,” para. 45; “obtain the posture information of the different device by analyzing the image of the different device,” para. 61).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Ishige to Cowburn. The motivation would have been for “allowing easy sharing” of the virtual space (Ishige, para. 17).
The combination of Cowburn and Ishige does not disclose display a guide image for matching the second information terminal on the display superimposed on the image captured by the camera or measuring the relationship while displaying the guide image for matching the second information terminal on the display superimposed on the image captured by the camera.
In the same art of mixed reality, Dagley teaches display a guide image for matching the [target object] on the display superimposed on the image captured by the camera and performing the measuring while displaying the guide image for matching the [target object] on the display superimposed on the image captured by the camera (“display a guide marker (e.g., a guideline, a bounding shape, and/or the like) on an image, such that a user can align a target object with the guide marker (e.g., by pointing a smartphone camera at the target object until the target object aligns with the guide marker),” para. 13; “display the guide marker on a display of the smartphone, superimposed on an image being captured by a camera,” para. 17; “project the guide marker and/or the intersection point into 3D space (e.g., of a real world environment that is being captured by the user device, that includes the target object, etc.) to identify an intersection point in the real world,” para. 20).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Dagley to the combination of Cowburn and Ishige. The motivation would have been to “provide for effective and accurate” measurements (Dagley, para. 14). Cowburn discloses aiming a first information terminal at a second information terminal, which is a physical object. Dagley teaches, when aiming a first information terminal at a physical object, overlaying a guide image on a display of the first information terminal to assist aiming at the physical object. When Cowburn is modified to include the guide image of Dagley, the combination (i.e. a modified version of Cowburn) would teach overlaying the guide image onto the display of Cowburn while aiming at the physical object, which in the case of Cowburn is the second information terminal.
The combination of Cowburn, Ishige, and Dagley does not disclose terminate the measurement by the space sensor when determining that the first area and the second area cover a predetermined percentage or higher of the space.
In the same art of environment mapping, Dedonato teaches terminate the measurement by the space sensor when determining that the [scanned] area covers a predetermined percentage or higher of the space (“the AR system may identify whether a map quality is sufficient to stop scanning or if scanning should continue,” para. 314; “guide the user to observe the 3D environment of the user; collect data associated with the 3D environment of the user; determine a map quality index associated with the map; … stop guiding the user in response to identifying a stopping condition comprising a user input to stop or a determination that the map quality index passes a threshold … the map quality index is based on a percentage of the 3D environment that has associated collected data,” paras. 425-431).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Dedonato to the first area and the second area of the combination of Cowburn, Ishige, and Dagley. The motivation would have been for “a more accurate version of the world model” (Dedonato, para. 102).
Regarding claim 36, the combination of Cowburn, Ishige, Dagley, and Dedonato renders obvious wherein the first information terminal is configured to display an image representing a measured area on the display superimposed on the image captured by the camera when sharing the recognition of the space with the second information terminal (e.g. Fig. 34B of Dedonato shows an indication 3214 representing that the area has been scanned; see claim 34 for motivation to combine).
Regarding claim 37, the combination of Cowburn, Ishige, Dagley, and Dedonato renders obvious wherein the first information terminal is configured to display an image representing an unmeasured area on the display superimposed on the image captured by the camera when sharing the recognition of the space with the second information terminal (e.g. Fig. 34A of Dedonato shows an indication 3210 representing that the area has not yet been scanned; see claim 34 for motivation to combine).
Regarding claims 44, 45, and 46, they are rejected using the same citations and rationales described in the rejections of claims 34, 36, and 37, respectively.
Regarding claim 48, the combination of Cowburn, Ishige, Dagley, and Dedonato renders obvious display an image representing a measured area at a position corresponding to the first area and the second area of the image captured by the camera (e.g. Fig. 34B of Dedonato shows an indication 3214 representing that the area has been scanned; see claim 34 for motivation to combine).
Claims 47 and 49 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Cowburn, Ishige, Dagley, and Dedonato, and further in view of Oi et al. (US 2011/0224902; hereinafter “Oi”).
Regarding claim 47, it is rejected using the same citations and rationales described in the rejection of claim 34, with the additional limitations of: the first partial space data including a first time stamp indicating a date and time when the first area was measured; in response to receiving a partial space data relating to the first area from the second information terminal, compare a time stamp included in the received partial space data relating to the first area with the first time stamp; and, in response to determining that the time stamp included in the received partial space data relating to the first area is newer than the first time stamp, convert the received partial space data described in the second coordinate system to data described in the first coordinate system and update the first partial space data using the converted data. These limitations are not taught by the combination of Cowburn, Ishige, Dagley, and Dedonato.
In the same art of environment mapping, Oi teaches the first partial space data including a first time stamp indicating a date and time when the first area was measured (“include position data of each object in the real space in the coordinate system of the global map and a time stamp related to the position data,” para. 16), in response to receiving a partial space data relating to the first area from the second information terminal, compare a time stamp included in the received partial space data relating to the first area with the first time stamp (“object B2 has moved during a period from the latest update time of the partial global map … to the generation time of the local map,” para. 149), in response to determining that the time stamp included in the received partial space data relating to the first area is newer than the first time stamp, convert the received partial space data described in the second coordinate system to data described in the first coordinate system and update the first partial space data using the converted data (“The partial global map MG (Ua) before update includes position data of nine objects (Obj1 to Obj9). The time stamp of those position data indicates 2010/1/1 23:59:59 as an example. The local map MLa' after coordinate conversion includes position data of five objects (Obj1 to Obj4). The time stamp of those position data indicates 2010/1/2 12:00:00 as an example. In this case, the updating unit updates the position data of the partial global map MG (Ua) to the position data of the more recent local map MLa' after coordinate conversion for each of the common objects,” para. 161).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Oi to the combination of Cowburn, Ishige, Dagley, and Dedonato. The motivation would have been to “enable a change in position of a physical object in the real space to be quickly shared among users” (Oi, para. 6).
Regarding claim 49, the combination of Cowburn, Ishige, Dagley, Dedonato, and Oi renders obvious display an image representing a measured area at a position corresponding to the first area and the second area of the image captured by the camera (e.g. Fig. 34B of Dedonato shows an indication 3214 representing that the area has been scanned; see claim 47 for motivation to combine).
Conclusion
Applicant's amendment necessitated any new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ryan McCulley whose telephone number is (571)270-3754. The examiner can normally be reached Monday through Friday, 8:00am - 4:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN MCCULLEY/Primary Examiner, Art Unit 2611