Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Remarks – Double Patenting
Although it is the position of the examiner that the present scope of the claims defines a patentably distinct and unobvious variation of parent applications 15/274,260 (now US Patent Number 10,739,157), 16/719,453 (now US Patent Number 11,486,724), and 17/960,339 (now US Patent Number 12,259,252), in the event the scope of the claims changes over the course of patent prosecution, the examiner reserves the right to make Double Patenting rejections in the future.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) and/or 102(a)(2) as being anticipated by Nozoe (US 2011/0295499).
Regarding claims 1, 8 and 15, Nozoe teaches a system comprising:
one or more processors (paragraph 27); and
a computer readable medium including one or more sequences of instructions that, when executed by the one or more processors (paragraph 27), causes the processors to perform operations comprising:
presenting, on a display of a device, at least a portion of a route to a destination location on a map (Figs. 3-4, 7A-8C, 10A-10C and 12A-12C display a portion of a navigated route on the display);
identifying, by the application, a second location on the map to be displayed on a display of the device concurrently with a current location of the device (paragraph 118 teaches determining a second location, an upcoming point of interest, along the route that is to be displayed concurrent with the current location);
determining, by the application, a first set of one or more edges of a target area of the map that includes at least the second location on the map and the current location of the device (paragraphs 40-44 and 118 and flowchart in Fig. 6, wherein when the device is within a “predetermined distance” to the next point of interest, the system determines an “execution condition” to perform switching from a first screen G11 to a second screen G12 (see Figs.). A target display area A in the second screen G12 “is prepared by tilting the plan view map image G11 with respect to the tilt axis L by the tilt angle θ to the up side of the display region A”. Therefore, in setting the target display area A, the display area A determines a bounding box defining the portion of the map to be displayed when an upcoming point of interest I is approaching. A bounding box (display area A) defines one or more edges (e.g., top, bottom, left and right boundaries) of a target display area A. Accordingly, the reference discloses determining a first set of one or more edges for each of the transition process map images generated);
determining, by the application, a zoom level that causes the first set of one or more edges to correspond respectively to a second set of one or more edges of a portion of the map that is to be displayed on the display of the device (as discussed above, as a result of setting a bounding box, the level of view (set via the tilt angle θ) is de facto the “zoom level” that causes the first set of one or more edges to correspond to the one or more edges of the portion of the map that is to be displayed on the display (with respect to display region A)); and
displaying the portion of the map on the display of the device at the zoom level (Figs. 4, 7B, 7C and 8C shows the map at the “zoom level” that causes the first set of one or more edges to correspond to a second set of one or more edges).
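For illustration only, the bounding-box and zoom-level determination mapped above can be sketched as follows. This is a hypothetical sketch, not code from Nozoe or from the claims; the planar degree coordinates, the margin parameter, and the slippy-map model in which the visible span halves per zoom level are all illustrative assumptions.

```python
import math

def target_edges(loc_a, loc_b, margin=0.1):
    """Bounding box (edges) containing both locations, plus a margin.

    loc_a, loc_b: (latitude, longitude) tuples in degrees.
    Returns (top, bottom, left, right) edges.
    Illustrative only; a production mapper would use projected coordinates.
    """
    lats = [loc_a[0], loc_b[0]]
    lons = [loc_a[1], loc_b[1]]
    pad_lat = (max(lats) - min(lats)) * margin
    pad_lon = (max(lons) - min(lons)) * margin
    return (max(lats) + pad_lat, min(lats) - pad_lat,
            min(lons) - pad_lon, max(lons) + pad_lon)

def zoom_for_edges(edges, viewport_deg=360.0, max_zoom=20):
    """Highest zoom level whose viewport still spans the given edges.

    Assumes a slippy-map model: visible span halves with each zoom step.
    """
    top, bottom, left, right = edges
    span = max(top - bottom, right - left)
    if span <= 0:
        return max_zoom
    zoom = int(math.floor(math.log2(viewport_deg / span)))
    return max(0, min(zoom, max_zoom))
```

Under these assumptions, the edges fixed by the two locations determine the zoom level, which is the relationship the rejection reads onto the tilt-based display area A of Nozoe.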
Regarding claims 2, 9 and 16, Nozoe teaches the operations further comprising selecting the second location on the map responsive to determining that the second location is to be framed with the current location of the device within a navigation presentation (paragraphs 40-44 and 118 and flowchart in Fig. 6, wherein when the device is within a “predetermined distance” to the next point of interest, the system determines an “execution condition” to perform switching from a first screen G11 to a second screen G12 (see Figs.), thereby “selecting” the second location on the map responsive to determining it should be framed).
Regarding claims 3, 10 and 17, Nozoe teaches the operations further comprising:
defining, by the application, a virtual camera that views the device (paragraphs 40-44 teach a camera view that is different from a top-down view, such as a plan view, which meets the claimed viewpoint position, orientation and orthographic projection parameters);
determining, by the application, an orientation of the virtual camera to ensure that a field of view of the camera includes the current location of the device and the second location (paragraphs 40-44 show that the plan view includes both the current and second locations on the display region R);
determining, by the application, the first set of one or more edges of the target area of the map based on the field of view of the camera (Therefore, in setting the target display area A, the display area A determines a bounding box defining the portion of the map to be displayed when an upcoming point of interest I is approaching. A bounding box (display area A) defines one or more edges (e.g. top, bottom, left and right boundaries) of a target display area A. Accordingly, the reference discloses determining a first set of one or more edges for each of the transition process map images generated); and
presenting, by the application, the current location of the device on the map and the second location from a perspective of the virtual camera (paragraphs 40-44 show that the plan view includes both the current and second locations on the display region R).
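The virtual-camera reasoning above can be illustrated with a short sketch: position a top-down camera so that its field of view covers both the current location and the second location. This is a hypothetical sketch under assumed planar coordinates and a simple view-cone model, not the geometry used by Nozoe or recited in the claims.

```python
import math

def camera_pose(current, second, fov_deg=60.0):
    """Place a top-down virtual camera so its view cone covers both points.

    current, second: (x, y) map coordinates in meters (planar assumption).
    Returns (center_x, center_y, altitude): the camera sits over the midpoint
    at the lowest altitude at which both points fall inside the view cone.
    """
    cx = (current[0] + second[0]) / 2.0
    cy = (current[1] + second[1]) / 2.0
    half_span = math.hypot(second[0] - current[0],
                           second[1] - current[1]) / 2.0
    # The farther point lies inside the cone when
    # tan(fov / 2) >= half_span / altitude.
    altitude = half_span / math.tan(math.radians(fov_deg) / 2.0)
    return cx, cy, altitude
```

In this model the camera pose is fully determined by the two locations, which parallels the rejection's reading that the plan view of Nozoe frames both the current and second locations within display region R.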
Regarding claims 4, 11 and 18, Nozoe teaches the claimed wherein: the virtual camera continually views the current location of the device as the device traverses a route (paragraphs 40-44 and 115 teach that the system is set to always show the current location and a predetermined number of route points during the trip); and
the application presents the route on the display of the device from the perspective of the virtual camera (Figs. 4, 7B, 7C and 8C at least show an example of how the route is displayed in plan view, which, as discussed above, meets the parameters for a virtual camera view).
Regarding claims 5, 12 and 19, Nozoe teaches the operations further comprising:
identifying, by the application, a third location on the map to be displayed on the display of the device concurrently with the current location of the device and the second location (paragraph 115 teaches wherein the system is set to always show the current location and a predetermined ordinal number of route points during the trip; e.g., when set to one, the current location and the next route point are maintained);
wherein the target area is determined such that the target area includes the first location, the second location, and the current location of the device (paragraph 115 teaches wherein the system is set to always show the current location and a predetermined ordinal number of route points during the trip; e.g., when set to one, the current location and the next route point are maintained. Also see Figs. 10A-10C and 12A-12C, wherein multiple upcoming points of interest are concurrently displayed with the current location and the second location as discussed above).
Regarding claims 6, 13 and 20, Nozoe teaches wherein identifying the second location on the map comprises selecting the second location in response to determining that the second location corresponds to a point of interest to be framed (paragraphs 40-44 and 118 and flowchart in Fig. 6, wherein when the device is within a “predetermined distance” to the next point of interest, the system determines an “execution condition” to perform switching from a first screen G11 to a second screen G12 (see Figs.), thereby “selecting” the second location on the map responsive to determining it should be framed).
Regarding claims 7 and 14, Nozoe teaches wherein the first set of one or more edges includes at least two edges (as discussed in claim 1 above, in setting the target display area A, the display area A determines a bounding box defining the portion of the map to be displayed when an upcoming point of interest I is approaching. A bounding box (display area A) defines one or more edges (e.g., top, bottom, left and right boundaries) of a target display area A. Accordingly, the reference discloses determining a first set of one or more edges for each of the transition process map images generated; therefore, at least two edges are included).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Konig et al. (US 2017/0314945) teaches a system that provides alternative routes between a decision point and a destination.
Tertoolen et al. (US 2016/0252363) provides a fast forward preview of an upcoming decision point by advancing the position of the camera for the 3D perspective view when the current position is closer than a predetermined distance to a decision point.
Kachi et al. (US 2016/0148503) teaches a traffic information guide system, a traffic information guide device, a traffic information guide method, and a computer program that enable easily understandable precise traffic information to be provided to a user even in a case where several traffic information items are concentrated around the same location.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GELEK W TOPGYAL whose telephone number is (571)272-8891. The examiner can normally be reached M-F (9:30-6 PST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn, can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GELEK W TOPGYAL/ Primary Examiner, Art Unit 2481