DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This action is responsive to communication(s) filed on 10/29/25.
3. The amendment to claim 8 overcomes the 35 U.S.C. 112(b) rejection set forth in the previous Office action.
4. After further consideration and the discovery of new references, the indicated allowability of dependent claim 7 set forth in the previous Office action has been withdrawn. The new rejections are set forth below.
5. Claim 1 is objected to because "augument" in line 1 should read -- augment --.
Claim Rejections - 35 USC § 103
6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
7. Claims 1-3 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Do (US 2010/0161207) in view of Choi et al. (KR 20200128343, see English Translation), or Yangzhi Yan, hereinafter Yan (CN 112907659, see English Translation), or Llanos Ramirez et al., hereinafter Ramirez (WO 2022/226529).
As per claim 1, Do discloses an AR service platform system comprising:
a relay terminal configured to calculate a location, a direction, or a posture of a vehicle using information collected from a positioning sensor or a camera ([0034] The information collecting terminal (C) may include one or a plurality of cameras, and the camera may capture a fixed time point to generate a still image or video. When the information collecting terminal (C) includes a plurality of cameras, each camera may be installed such that a still image or video captured by each camera include omnidirectional visual information; [0035] The information collecting terminal (C) may include a wireless communication unit. The wireless communication unit may transmit the location information collected by the information collecting terminal (C) to the location-based service server (S) or the mobile terminal 100), extract POI objects included in an image acquired from the camera, and perform control such that additional information for the extracted POI objects is output and matched to the extracted POI objects ([0129] As shown in FIG. 3, the mobile terminal 100 may receive an input selecting a certain point whose geographical information is desired to be received. The input may be received from the user or from a different terminal (S10); [0130] Thereafter, when the mobile terminal 100 receives the input selecting the certain point, the mobile terminal 100 requests a transmission of location information from the information collecting terminal (C) installed at a position related to the certain point or from a server or from the memory 170, and receives the location information including the omnidirectional visual information with respect to the certain point. [0131] In more detail, when the mobile terminal 100 receives the input selecting the certain point, the mobile terminal may obtain the visual information corresponding to the certain point (S20)); and
an AR main server connected to the relay terminal and configured to extract and provide additional information for POI objects included in information collected from the camera to the relay terminal ([0132] In step S20 of receiving the location information, the mobile terminal 100 may determine whether or not the information collecting terminal (C) exists at a position available for communication with the mobile terminal 100 (S21). If the information collecting terminal (C) exists at a position available for communication with the mobile terminal 100, the mobile terminal 100 may receive the location information transmitted from the information collecting terminal (C) (S22). If the information collecting terminal (C) does not exist at a position available for communication with the mobile terminal 100, the mobile terminal 100 may request transmission of location information including visual information related to the certain point stored in the server from the server and receive location information transmitted from the server (S23). The mobile terminal 100 may also retrieve previously received visual information from the memory 170.)
While Do does not teach wherein the AR main server stores feature points in a three-dimensional (3D) space in a point cloud library (PCL) through location relationships of feature points moved on the basis of the extracted feature points, Choi, Yan, or Ramirez teaches this limitation, as described below.
Choi teaches an augmented reality (AR) service platform system wherein a main server stores feature points in a 3D space in a point cloud library (3D server 200) through location relationships of feature points moved on the basis of the extracted feature points (page 2, “estimating a user's current location based on GPS coordinates of a user terminal and coordinates of feature points captured by the user termina[l]”, page 3, “The present invention determines the terminal coordinates based on the coordinates of the reference feature points included in the tile data corresponding to the GPS coordinates of the user terminal and the position change of the reference feature point in the image captured by the camera of the user terminal, thereby determining the current location of the user”, page 7, “The server 200 may include a 3D map 10 composed of a plurality of unit areas 10 ′ and a database 220 in which tile data corresponding to each of the plurality of unit areas 10 ′ is stored in advance. More specifically, information on the 3D map 10 and reference feature points (interest points) included in the 3D map 10 may be stored in advance in the database 220. In this case, the 3D map 10 may be composed of a unit region 10 ′, and information on a reference feature point corresponding to each unit region 10 ′ may be defined as tile data”, page 9, “The feature point extraction module 140 may identify any one reference feature point 11 matching a feature point extracted from an external image among a plurality of reference feature points 11 included in the tile data received from the server 200 (S150)”, page 10, “estimating the location of the user by using the feature points around the user, the user can call the vehicle 300 to the correct location of the current user”, page 11, “More specifically, the server 200 may receive GPS coordinates from the vehicle GPS module 330 and transmit tile data corresponding to the received GPS coordinates to the transportation vehicle 300. Subsequently, the transport vehicle 300 may photograph an external image using the vehicle camera module 340 and extract feature points from the photographed external image. Subsequently, the transport vehicle 300 may identify any one reference feature point 11 matching the feature point extracted from the external image among the plurality of reference feature points 11 included in the tile data, and the identified reference feature point 11 The vehicle coordinate may be determined based on the coordinate of) and the position change of the reference feature point 11 in the external image”, etc.)
Or Yan similarly teaches a positioning system wherein a main server stores feature points in a 3D space in a point cloud library through location relationships of feature points moved on the basis of the extracted feature points (page 4, “acquiring road environment point cloud data; determining sparse point cloud data of a road characteristic local plane according to the road environment point cloud data; sending a storage request aiming at the sparse point cloud data to a server” and “sending the sparse point cloud data to a server so as to be convenient for the server to store the sparse point cloud data”, “the server is used for receiving the storage request and storing the sparse point cloud data into a road characteristic data set … determining second attitude data of the current frame of the second mobile equipment according to the road feature point cloud data of the current frame and the road feature data set”, page 9, “enables the 3D point cloud based on the elements of the local plane features in the road to construct a road feature data set, and the 3D point cloud is the end point of the local plane in the road feature, so the 3D point cloud has sparsity, a compact road feature map is realized”, page 17, “the road segment includes data of 100000 road feature points, which include the down-sampled external connection end points of the road feature local plane. It follows that this feature point data amount is much smaller than that of the road feature map constructed”, page 18, “After the three-dimensional point cloud data of the current frame is determined, the next step can be carried out, and the pose data of the second mobile equipment is determined according to the road characteristic data and the road characteristic data set of the current frame. And 2.4, determining second attitude data of the current frame of the second mobile equipment according to the road feature point cloud data of the current frame and the road feature data set. In this step, the road feature point cloud data (observation information) of the current frame may be compared with the road feature data set in the road feature database, and the pose data of the second mobile device may be determined according to the matched feature points. This step may use a general algorithm for position estimation based on road feature maps, such as a particle filter, etc. In specific implementation, the specific method for matching the observation information with the feature point database may be as follows: the method comprises the steps of taking the previous pose corresponding to the previous frame of a current frame as an initial pose, taking the increment of the motion of an imu sensor in the time period (the time difference between the current frame and the previous frame) as a motion model, taking sparse point cloud data (road characteristic data) of the current frame obtained through real-time extraction as an observation model, inputting 3D characteristics (the observation model) from a monocular camera and a local 3D characteristic map from a road characteristic data set into a Bayes filter, and performing optimization estimation on the current pose by using a Bayes filtering method.
In this embodiment, when the second mobile device starts the automatic driving, a preliminary pose of the current camera may be provided through a GNSS (Global Navigation Satellite System), where an RTK (Real Time Kinematic carrier phase difference technique) may be adopted to provide a preliminary pose with higher accuracy, that is, the first pose data of the device starting position. The process can be operated only once in the system starting stage, and can also be triggered regularly in the system operation process to provide multi-element fusion correction of the real-time pose”, page 20, “stored in the server; in this case, the second mobile device may further perform the steps of: 1) sending a mobile equipment positioning request aiming at the two-dimensional image data to a server, wherein the positioning request comprises the first position and attitude data, so that the server can determine three-dimensional point cloud data corresponding to the two-dimensional road characteristic data as the three-dimensional point cloud data of the current frame according to the first position and attitude data and a road characteristic data set; determining second attitude data of the current frame of the mobile equipment according to the three-dimensional point cloud data of the current frame and the road characteristic data set; 2) and receiving the second posture data returned by the server, page 21 “sending the sparse point cloud data to a server, so that the server constructs a road characteristic data set, namely a road characteristic map, on the basis of the sparse point cloud data; by the processing mode, a road characteristic data set is constructed based on the 3D point cloud of the elements”, and page 23,
“The target road may be a specific road section designated by a user, such as a road to be traveled from location 1 (e.g., north gate of a clique) to location 2 (e.g., core campus) in a clique area with a large occupied area (1 ten thousand mu)”. Also Figs. 3, 4, 7, 14, and 20).
Or Ramirez similarly teaches a positioning system wherein a main server stores feature points in a 3D space in a point cloud library through location relationships of feature points moved on the basis of the extracted feature points (paragraph 0026, “The example road sign 14 includes a machine-readable optic label 16 that contains information regarding the location of the road sign 14”, paragraph 0053, “[T]he disclosed system enables camera and computer vision system to derive a precise position by viewing a sign and determining an offset from the sign. [0054] In the above example embodiments, the vehicle 10 uses its computer vision-based algorithm to detect, recognize and interpret traffic signs, such as the road sign 14 having GPS coordinates registered in a database. The vehicle 10 uses the road/traffic sign 14 as a fixed reference and correct its localization by computing its distance to the traffic sign 14”, paragraph 0058, “the vehicle positioning system 15 of each neighboring vehicle 10 in the vehicle group includes a map generation block 70 which builds or generates a map. The map generation block 70 of each vehicle 10 identifies feature points, i.e., pixels extracted from images captured by the camera 12 of the corresponding vehicle 10 which are associated with, for example, a corner of an object appearing in the images. Using the identified feature points, the map generation block 70 of each vehicle 10 constructs a three dimensional (3D) map of the area of interest (e.g., a tunnel) via observations of local surroundings and landmarks”, paragraphs 0061 and 0062, “the master entity 74 is a server which is located remotely from each vehicle 15 and is able to wirelessly communicate with each vehicle 10 in the vehicle group. In the instance in which the area of interest pertains to a tunnel, the static server may be located in the tunnel. FIG. 8 illustrates vehicles 10 traveling along a roadway R in a GPS-denied environment, such as in a tunnel, with the master entity/server 74 positioned therein. [0062] In an alternative example embodiment, instead of each vehicle 10 sending the information to a master entity/server, the vehicles 10 interact only with the neighboring vehicles 10 in a vehicle group. By sharing corrected local/global maps perceived by each vehicle 10, the vehicle group is able to construct, via a consensus algorithm, a global map which is the result of the fusion of all individual maps. This alternative example embodiments works when a master entity/server is not available to fuse together collected map information from the vehicles 10. This approach is more robust to failures, vulnerabilities and cyber-attacks because a group of vehicles participate in the fusion and/or global map creation instead of a single entity. When the fused map is created, the map may be transmitted to and stored in a server or other device that is remote from the vehicles 10 and at least in proximity with the area of interest, for access by other vehicles within a communication range in the area of interest”, paragraphs 0063-0065, “The second function block of the vehicle positioning system 15 of each vehicle 10 is a map reuse block 72 which, as discussed above, may be used by a vehicle 10 for localization. The shared, previously generated map is a 3D point cloud and/or 3D point cloud map, where each point in the map is referenced to a global coordinate system and each point has a feature descriptor (e.g., SIFT, SURF, ORB, HOG, etc.).
For each feature point, the map reuse block 72 computes a feature descriptor, which is a unique identifier constructed from the feature neighborhood. The map reuse block 72 of each vehicle 10 has the ability to decide whether to use the shared map for localizing the vehicle 10 using the information of the descriptors that each point possesses. [0064] A description of the operation of the vehicle positioning system 15 is described in Fig. 6 according to an example embodiment. The camera 12 of a vehicle 10 captures images of the vehicle's environment at 602. Some of the captured images includes representations of a landmark (e.g., a road sign 14 as discussed above). The controller 25 may detect in the captured images a representation of the landmark, and decode map information associated with the landmark including the scale at 604. At 606, the controller 25 generates a local 3D map and, based upon the point cloud map provided by the landmark, the local 3D map is corrected and/or transformed to include global coordinates. [0065] In the event a server or other computational device is available as the master entity 74 that is located remote from the vehicle group, vehicle 10 sends its corrected local 3D map to the server at 608, and the other vehicles in the vehicle group do the same. The vehicle 10 may also report to the server the covariance of its observations. The server fuses the collected local maps at 610 and fuses the collected maps to create a global map. The server may determine that the global map is available for use following the global map meeting a predetermined criteria, such as having a relatively low covariance”, paragraph 0068 “each landmark includes information in a database that is accessible to a vehicle 12 which observes the landmark, the information including GPS position of the landmark which does not change over time, and a point cloud of the landmark in which every point cloud point has a visual descriptor (SIFT, SURF, ORB, HOG, etc.) with the point cloud being constructed using world or global scale/coordinates. [0069] One or more vehicles 10 performs the global map build until the global map is available”).
As Do and Choi or Yan or Ramirez are from analogous art, it would have been obvious to an artisan before the effective filing date of the claimed invention to incorporate the server as taught by Choi, Yan, or Ramirez to store feature points in order to assist in identifying positions/locations.
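For illustration only, and not as a characterization of any cited reference or of the claimed invention, the following sketch shows one conventional way that feature points tracked ("moved") between two camera frames can be triangulated into 3D points and accumulated as a stored point cloud. OpenCV functions, the ORB detector, and the known projection matrices P1 and P2 are assumptions made for the example.

```python
# Illustrative sketch only: triangulating feature points matched across two frames
# and accumulating them as a 3D point cloud (which could later be persisted with a
# point cloud library). P1 and P2 are assumed, known 3x4 camera projection matrices.
import numpy as np
import cv2

def build_point_cloud(img1, img2, P1, P2):
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match the same physical features across the two frames; the change in image
    # location of each matched feature is what makes triangulation possible.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).T  # 2 x N
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).T  # 2 x N

    # Triangulate to homogeneous 3D points, then normalize to Euclidean coordinates.
    pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)
    pts3d = (pts4d[:3] / pts4d[3]).T                             # N x 3

    # The N x 3 array plus matched descriptors is the stored "point cloud" of features.
    descriptors = np.array([des1[m.queryIdx] for m in matches])
    return pts3d, descriptors
```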
As per claim 2, Do discloses all the elements of claim 1 as discussed above, and further discloses wherein the relay terminal receives additional information for POI objects, which are included in an image acquired from the camera, from the AR main server before extracting POI objects from the camera ([0132], where “the mobile terminal 100 may request transmission of location information including visual information related to the certain point stored in the server from the server and receive location information transmitted from the server (S23). The mobile terminal 100 may also retrieve previously received visual information from the memory 170”).
As per claim 3, Do discloses all the elements of claim 1 as discussed above, and further discloses wherein the AR service platform system classifies any one of a road, a sidewalk, a crosswalk, a sign, a person, and a vehicle from an image that is received ([0163] As shown in FIG. 7c, the still image or video of a different time point from that of the still image or video displayed on FIG. 7a may include a forest (F), a lake (L), and the like, that the still image or video displayed on FIG. 7a does not have).
As per claim 9, Do as modified by Choi or Yan or Ramirez teaches wherein the relay terminal acquires a location or a posture of the vehicle by comparing feature points of objects on a driving path, acquired from an image taken by the camera, with stored feature points of objects (see the citations from the Choi, Yan, or Ramirez references as explained above).
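For illustration only, the sketch below shows a conventional way such a comparison can yield a location and posture: features in the current camera image are matched against stored 3D feature points, and the camera pose is recovered from the 2D-3D correspondences. The map format (3D points plus descriptors), the ORB detector, and the intrinsic matrix K are assumptions for the example, not teachings of the references.

```python
# Illustrative sketch only: localizing a vehicle by matching current-image feature
# points against a stored map of 3D feature points and solving for the camera pose.
import numpy as np
import cv2

def localize(frame, map_points_3d, map_descriptors, K):
    orb = cv2.ORB_create()
    kp, des = orb.detectAndCompute(frame, None)

    # Associate current-image features with stored map features by descriptor distance.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des, map_descriptors)

    img_pts = np.float32([kp[m.queryIdx].pt for m in matches])          # N x 2
    obj_pts = np.float32([map_points_3d[m.trainIdx] for m in matches])  # N x 3

    # Solve for the camera pose (rotation rvec, translation tvec) from 2D-3D pairs.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    return (rvec, tvec) if ok else None
```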
8. Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Do (US 2010/0161207) in view of Choi et al. (KR 20200128343, see English Translation), or Yangzhi Yan, hereinafter Yan (CN 112907659, see English Translation), or Llanos Ramirez et al., hereinafter Ramirez (WO 2022/226529) as applied to claims 1-3 and 9 above, and further in view of CEng et al. (CN 107784693).
As per claim 4, the combination of Do and Choi or Yan or Ramirez does not explicitly teach wherein the relay terminal converts a field of view of the camera and real-world camera image coordinates into camera coordinates in a 3D space to provide an AR image. However, this is known in the art as taught by CEng. CEng discloses an information processing method in which the system operates “to convert the coordinate of the second position from the image pixel coordinate system to image the physical coordinate system, and then transferred to the camera coordinate system to obtain the position of the object in the real world coordinate system” (page 6, lines 4-7).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of CEng into the combination of Do and Choi or Yan or Ramirez because Do discloses a method of providing location-based information and CEng further discloses that the view can be obtained from the camera for the purpose of improving the user experience.
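For illustration only, the following sketch shows the standard pinhole-camera arithmetic underlying a pixel-to-camera-to-world conversion of the kind quoted from CEng. The intrinsic matrix K, the depth value, and the camera pose (R, t) are assumed known for the example; none of these values come from the cited references.

```python
# Illustrative sketch only: back-projecting an image pixel into camera coordinates,
# then transforming camera coordinates into world (real-world) coordinates.
import numpy as np

def pixel_to_world(u, v, depth, K, R, t):
    # Image pixel -> normalized image plane -> camera coordinates (scaled by depth).
    uv1 = np.array([u, v, 1.0])
    cam_pt = depth * (np.linalg.inv(K) @ uv1)   # 3-vector in the camera frame

    # Camera coordinates -> world coordinates: X_world = R @ X_cam + t.
    return R @ cam_pt + t

# Example usage with a hypothetical camera (focal length 800 px, 640x480 image):
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
print(pixel_to_world(400, 300, depth=5.0, K=K, R=np.eye(3), t=np.zeros(3)))
```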
As per claim 5, the combination discloses all the elements of claim 4 as discussed above, and CEng further teaches wherein the AR service platform system combines and outputs 3D and media data in accordance with camera coordinates that change in accordance with an angle of view of the camera and a location of an information output terminal configured to output additional information ([Abstract] “through obtaining the three-dimensional space coordinate system of the real world, obtaining observation point in the first position of three-dimensional space coordinate system of the real world; obtaining the object in the second position of three-dimensional space coordinate system of the real world, the third position according to the first position and the second position to obtain the object on the display device according to the third location of the display device on the fourth obtaining the object position on the projection device. solves the display the same virtual information with real information when in the prior art will with a dislocation and so on, reducing the sense of reality of the virtual information”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of CEng into the combination of Do and Choi or Yan or Ramirez because Do discloses a method of providing location-based information and CEng further discloses that the view can be adjusted based on the camera for the purpose of improving the user experience.
9. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Do (US 2010/0161207) in view of Choi et al. (KR 20200128343, see English Translation), or Yangzhi Yan, hereinafter Yan (CN 112907659, see English Translation), or Llanos Ramirez et al., hereinafter Ramirez (WO 2022/226529) as applied to claims 1-3 and 9 above, and further in view of Naoki (JP 2020004121).
As per claim 6, the combination of Do and Choi or Yan or Ramirez does not explicitly teach wherein the AR main server extracts and stores feature points on the basis of a panorama image taken by the camera. However, this is known in the art as taught by Naoki. Naoki discloses a method of acquiring feature points in panoramic images ([Abstract] “An information processor acquires a plurality of panoramic images photographed at shooting positions in a space. Further, the information processor acquires feature points in the plurality of panoramic images”; page 3, lines 8-9).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Naoki into the combination of Do and Choi or Yan or Ramirez because Do discloses a method of providing location-based information and Naoki further discloses that the feature points of an image can be extracted for the purpose of improving the viewing experience.
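For illustration only, the short sketch below shows a conventional way feature points can be extracted from a panoramic image and stored for later matching. The file name and ORB parameters are assumptions for the example and are not drawn from Naoki.

```python
# Illustrative sketch only: extracting and storing feature points from a panorama.
import cv2

panorama = cv2.imread("panorama.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
orb = cv2.ORB_create(nfeatures=5000)
keypoints, descriptors = orb.detectAndCompute(panorama, None)

# Store pixel locations together with descriptors so they can be matched later.
stored_features = [(kp.pt, desc) for kp, desc in zip(keypoints, descriptors)]
```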
10. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Do (US 2010/0161207) in view of Choi et al. (KR 20200128343, see English Translation), or Yangzhi Yan, hereinafter Yan (CN 112907659, see English Translation), or Llanos Ramirez et al., hereinafter Ramirez (WO 2022/226529) as applied to claims 1-3 above, and further in view of Masayuki et al. (JP 2014183461).
As per claim 8, the combination of Do and Choi or Yan or Ramirez does not explicitly teach wherein the relay terminal acquires a location of the vehicle from a GPS or RTK and acquires a direction of the vehicle from an acceleration sensor or a gyro sensor. However, this is known in the art as taught by Masayuki et al., hereinafter Masayuki. Masayuki discloses a method of determining the location and direction of a unit using a GPS and a gyroscope (page 4, lines 17-19).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Masayuki into the combination of Do and Choi or Yan or Ramirez because Do discloses a method of providing location-based information and Masayuki further discloses that the location and direction of a unit can be obtained for the purpose of improving the viewing experience.
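For illustration only, the following sketch shows one conventional arrangement of this kind: position taken from the latest GPS/RTK fix and heading propagated by integrating a gyroscope's yaw rate. The sensor interfaces, units, and sample values are assumptions for the example and are not taken from Masayuki.

```python
# Illustrative sketch only: position from GPS/RTK, direction from gyro integration.
from dataclasses import dataclass

@dataclass
class VehicleState:
    lat: float = 0.0      # latitude from GPS/RTK
    lon: float = 0.0      # longitude from GPS/RTK
    heading: float = 0.0  # heading in radians, from gyro integration

def update(state, gps_fix, gyro_yaw_rate, dt):
    # Position comes directly from the latest GPS/RTK fix when one is available.
    if gps_fix is not None:
        state.lat, state.lon = gps_fix

    # Direction is propagated by integrating the gyro yaw rate (rad/s) over dt seconds;
    # an accelerometer or magnetometer could be blended in to bound drift.
    state.heading += gyro_yaw_rate * dt
    return state

state = VehicleState()
state = update(state, gps_fix=(37.5665, 126.9780), gyro_yaw_rate=0.02, dt=0.1)
```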
Conclusion
11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON CHAN whose telephone number is (571)272-3022. The examiner can normally be reached from 8:00 AM to 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alford Kindred can be reached at 571-272-4037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JASON CHAN/Supervisory Patent Examiner, Art Unit 2619