DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/26/2025 has been entered. Claims 1-3, 6-11 and 14-15 remain pending in the application and claims 4-5 and 12-13 are cancelled.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “driving information unit” in claims 1-3, 6-7, 9, 11, and 14; “sensing unit” in claims 1, 3, 6, 8-9 and 14-15; and “determiner” in claims 1, 3, 7-9, 11 and 14-15. The phrase “driving information unit” can be found in figure 1 element 10; page 2 lines 13, 16-17 and 21; page 10 lines 22-25; page 11 lines 9-22; page 12 lines 9-24; and page 14 lines 2-10 of the specification. The examiner interprets “driving information unit” as software embedded in hardware to acquire information about the driving road of the vehicle. The phrase “sensing unit” can be found in figure 1 element 30; page 2 lines 11-20; page 10 lines 7-25; page 11 lines 1-14; page 13 lines 1-23; and page 14 lines 2-7. The examiner interprets “sensing unit” as a camera sensor, a radar sensor, or a lidar sensor. The phrase “determiner” can be found in figure 1 element 20; page 2 lines 11-25; page 10 lines 3-6 and 18-22; page 11 lines 4-25; page 12 lines 1-5; page 14 lines 16-20; and page 15 lines 1-10. The examiner interprets “determiner” as software embedded in hardware to calculate a distance from the current driving position of the vehicle to the joining point.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 3, 6-9, 11 and 14-15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-13 of U.S. Patent No. 11941981. Although the claims at issue are not identical, they are not patentably distinct from each other because the applications share the same inventive entity or name at least one joint inventor in common. The instant application 18584135 is rejected on nonstatutory double patenting grounds over U.S. Patent No. 11941981 for that reason. The patented claims disclose each and every limitation of the instant application’s claims, or an obvious modification thereof. The patented claims recite a system and a method of detecting obstacles surrounding a vehicle and expanding the sensing range in response to merging or joining with a target road, and the instant application’s claims recite a similar concept. The difference between the patented claims and the instant application’s claims is as follows: the patented claims recite extending a lateral width of the sensing range based on the lane width of the target road to be joined, whereas the instant application’s claims recite extending the sensing range based on the lane width of the target road to be joined. Since both the patented claims and the instant application’s claims recite extending the coverage area of the road, it would have been obvious to one of ordinary skill in the art to apply a sensing component to provide coverage of the road on which the vehicle travels. Also, figure 2 and col. 2, lines 18-24 of the patent clearly disclose extending a lateral width of the sensing range based on the lane width of the target road to be joined. Please see the claims mapping in the nonstatutory double patenting table below.
Nonstatutory Double Patenting Table:
Instant Application No. 18584135
US Patent No. 11941981
1. (Currently amended) An obstacle detection system of a vehicle, the obstacle detection system comprising: a driving information unit configured to calculate driving position information of the vehicle and a lane width of a target road to be joined; a determiner configured to anticipate whether the vehicle will enter a joining point where the vehicle meets the target road to be joined based on the driving position information; a sensing unit configured to sense obstacles located beside the vehicle; and a controller configured to change a sensing range of the sensing unit to position the sensing range on the target road and detect an obstacle moving on the target road to be joined, based on that the determiner anticipates that the vehicle will enter the joining point and to extend the sensing range based on the lane width of the target road to be joined, wherein the driving information unit is further configured to calculate an angle of entry between a driving road of the vehicle and the target road to be joined based on information about the driving road of the vehicle and information about the target road to be joined, calculated by the driving information unit, wherein the controller is further configured to rotate the sensing range of the sensing unit based on the angle of entry between the driving road of the vehicle and the target road to be joined, calculated by the driving information unit so that the sensing range is to be located on the target road, wherein the determiner is further configured to calculate a distance from a driving position of the vehicle to the joining point based on the driving position information and the information about the driving road of the vehicle calculated by the driving information unit, and wherein the controller is further configured to extend the sensing range of the sensing unit based on the distance from the driving position of the vehicle to the joining point, calculated by the determiner, wherein the controller is 
further configured to extend a lateral width of the sensing range based on the lane width of the target road to be joined.
1. An obstacle detection system of a vehicle, the obstacle detection system comprising: a driving information unit configured to calculate driving position information of the vehicle; a determiner configured to anticipate whether the vehicle will enter a joining point where the vehicle meets a target road to be joined based on the driving position information calculated by the driving information unit; a sensing unit configured to sense obstacles located beside the vehicle; and a controller configured to change a sensing range of the sensing unit so as to detect an obstacle moving on the target road to be joined, in response that the determiner anticipates that the vehicle will enter the joining point, wherein the driving information unit is further configured to calculate a lane width of the target road to be joined based on information about the target road to be joined, calculated by the driving information unit, and wherein the controller is further configured to extend a lateral width of the sensing range based on the lane width of the target road to be joined, calculated by the driving information unit.
4. The obstacle detection system according to claim 2, wherein: the driving information unit is further configured to calculate an angle of entry between the driving road of the vehicle and the target road to be joined based on the information about the driving road of the vehicle and the information about the target road to be joined, calculated by the driving information unit; and the controller is further configured to rotate the sensing range of the sensing unit based on the angle of entry between the driving road of the vehicle and the target road to be joined, calculated by the driving information unit.
3. The obstacle detection system according to claim 2, wherein: the determiner is further configured to calculate a distance from a driving position of the vehicle to the joining point based on the driving position information and the information about the driving road of the vehicle calculated by the driving information unit; and the controller is further configured to extend the sensing range of the sensing unit based on the distance from the driving position of the vehicle to the joining point, calculated by the determiner.
3. (Previously presented) The obstacle detection system according to claim 1, wherein: the driving information unit is further configured to calculate information about the driving road of the vehicle and information about the target road to be joined; and the controller is configured to change the sensing range of the sensing unit based on the information about the driving road of the vehicle and the information about the target road to be joined, calculated by the driving information unit, when the determiner anticipates that the vehicle will enter the joining point.
2. The obstacle detection system according to claim 1, wherein: the driving information unit is further configured to calculate information about a driving road of the vehicle and the information about the target road to be joined; and the controller is configured to change the sensing range of the sensing unit based on the information about the driving road of the vehicle and the information about the target road to be joined, calculated by the driving information unit, when the determiner anticipates that the vehicle will enter the joining point.
1. (Currently amended) An obstacle detection system of a vehicle, the obstacle detection system comprising:… wherein the determiner is further configured to calculate a distance from a driving position of the vehicle to the joining point based on the driving position information and the information about the driving road of the vehicle calculated by the driving information unit, and wherein the controller is further configured to extend the sensing range of the sensing unit based on the distance from the driving position of the vehicle to the joining point, calculated by the determiner.
3. The obstacle detection system according to claim 2, wherein: the determiner is further configured to calculate a distance from a driving position of the vehicle to the joining point based on the driving position information and the information about the driving road of the vehicle calculated by the driving information unit; and the controller is further configured to extend the sensing range of the sensing unit based on the distance from the driving position of the vehicle to the joining point, calculated by the determiner.
6. (Previously presented) The obstacle detection system according to claim 1, wherein: the driving information unit is further configured to calculate moving information of the vehicle; and the controller is further configured to generate a warning signal based on the moving information of the vehicle and the distance from the driving position of the vehicle to the joining point when the sensing unit detects the obstacle within the changed sensing range.
5. The obstacle detection system according to claim 1, wherein: the driving information unit is further configured to calculate moving information of the vehicle; and the controller is further configured to generate a warning signal based on the moving information of the vehicle and a distance from a driving position of the vehicle to the joining point when the sensing unit detects the obstacle within the changed sensing range.
7. (Currently amended) The obstacle detection system according to claim 1, wherein: the driving information unit is further configured to calculate the information about the driving road of the vehicle and the information about the target road to be joined; and the determiner is configured to anticipate whether or not the vehicle will enter the joining point by comparing a width of the driving road and a width of the target road to be joined, calculated by the driving information unit, with each other.
6. The obstacle detection system according to claim 1, wherein: the driving information unit is further configured to calculate information about a driving road of the vehicle and the information about the target road to be joined; and the determiner is configured to anticipate whether or not the vehicle will enter the joining point by comparing a width of the driving road and a width of the target road to be joined, calculated by the driving information unit, with each other.
8. (Original) The obstacle detection system according to claim 1, wherein: the controller is further configured to longitudinally expand the sensing range of the sensing unit with respect to the target road to be joined, when the determiner anticipates that the vehicle will enter the joining point.
7. The obstacle detection system according to claim 1, wherein: the controller is further configured to longitudinally expand the sensing range of the sensing unit with respect to the target road to be joined, when the determiner anticipates that the vehicle will enter the joining point.
9. (Currently amended) An obstacle detection method of a vehicle, the obstacle detection method comprising: calculating, by a driving information unit, driving position information of the vehicle and a lane width of a target road to be joined; anticipating, by a determiner, whether the vehicle will enter a joining point where the vehicle meets [[a]] the target road to be joined based on the driving position information; changing, by a controller, a sensing range of a sensing unit to position the sensing range on the target road and detect an obstacle moving on the target road to be joined in response to the determiner anticipating that the vehicle will enter the joining point; and calculating, by the driving information unit, an angle of entry between a driving road of the vehicle and the target road to be joined based on the calculated information about the driving road of the vehicle and the calculated information about the target road to be joined, wherein, the changing of the sensing range includes extending, by the controller, the sensing range based on the lane width of the target road to be joined wherein, the changing of the sensing range includes rotating, by the controller, the sensing range of the sensing unit based on the calculated angle of entry between the driving road of the vehicle and the target road to be joined so that the sensing range is to be located on the target road, and wherein the method further includes: calculating, by the determiner, a distance from a driving position of the vehicle to the joining point based on the calculated information about the driving road of the vehicle and the calculated information about the target road to be joined, wherein the changing of the sensing range further includes extending, by the controller, the sensing range based on the calculated distance from the driving position of the vehicle to the joining point; and wherein the changing of the sensing range includes extending, by the controller, a lateral 
width of the sensing range based on the lane width of the target road to be joined.
8. An obstacle detection method of a vehicle, the obstacle detection method comprising: calculating, by a driving information unit, driving position information of the vehicle; anticipating, by a determiner, whether the vehicle will enter a joining point where the vehicle meets a target road to be joined based on the calculated driving position information; and changing, by a controller, a sensing range of a sensing unit so as to detect an obstacle moving on the target road to be joined in response to the determiner anticipating that the vehicle will enter the joining point, wherein the obstacle detection method further comprising: calculating, by the driving information unit, a lane width of the target road to be joined based on information about the target road to be joined and driving position information, wherein, the changing of the sensing range includes extending, by the controller, a lateral width of the sensing range based on the calculated lane width of the target road to be joined.
11. The obstacle detection method according to claim 9, further comprising calculating, by the driving information unit, an angle of entry between the driving road of the vehicle and the target road to be joined based on the calculated information about the driving road of the vehicle and the calculated information about the target road to be joined, wherein, the changing of the sensing range includes rotating, by the controller, the sensing range of the sensing unit based on the calculated angle of entry between the driving road of the vehicle and the target road to be joined.
10. The obstacle detection method according to claim 9, further comprising calculating, by the determiner, a distance from a driving position of the vehicle to the joining point based on the calculated information about the driving road of the vehicle and the calculated information about the target road to be joined, wherein, the changing of the sensing range includes extending, by the controller, the sensing range based on the calculated distance from the driving position of the vehicle to the joining point.
11. (Previously presented) The obstacle detection method according to claim 9, further comprising calculating, by the driving information unit, information about the driving road of the vehicle and information about the target road to be joined, wherein, the determiner anticipates that the vehicle will enter the joining point, and in the changing of the sensing range, the sensing range is changed based on the calculated information about the driving road of the vehicle and the calculated information about the target road to be joined.
9. The obstacle detection method according to claim 8, further comprising calculating, by the driving information unit, information about a driving road of the vehicle and the information about the target road to be joined, wherein, the determiner anticipates that the vehicle will enter the joining point, and in the changing of the sensing range, the sensing range is changed based on the calculated information about the driving road of the vehicle and the calculated information about the target road to be joined.
9. (Currently amended) An obstacle detection method of a vehicle, the obstacle detection method comprising: … wherein the method further includes: calculating, by the determiner, a distance from a driving position of the vehicle to the joining point based on the calculated information about the driving road of the vehicle and the calculated information about the target road to be joined, wherein the changing of the sensing range further includes extending, by the controller, the sensing range based on the calculated distance from the driving position of the vehicle to the joining point; and wherein the changing of the sensing range includes extending, by the controller, a lateral width of the sensing range based on the lane width of the target road to be joined.
10. The obstacle detection method according to claim 9, further comprising calculating, by the determiner, a distance from a driving position of the vehicle to the joining point based on the calculated information about the driving road of the vehicle and the calculated information about the target road to be joined, wherein, the changing of the sensing range includes extending, by the controller, the sensing range based on the calculated distance from the driving position of the vehicle to the joining point.
14. (Previously presented) The obstacle detection method according to claim 11, further comprising: calculating, by the driving information unit, moving information of the vehicle; calculating, by the determiner, the distance from the driving position of the vehicle to the joining point based on the calculated information about the driving road of the vehicle and the calculated driving position information; and generating, by the controller, a warning signal based on the calculated moving information of the vehicle and the calculated distance from the driving position of the vehicle to the joining point in response to sensing, by the sensing unit, the obstacle within the changed sensing range.
12. The obstacle detection method according to claim 9, further comprising: calculating, by the driving information unit, moving information of the vehicle; calculating, by the determiner, a distance from a driving position of the vehicle to the joining point based on the calculated information about the driving road of the vehicle and the calculated driving position information; and generating, by the controller, a warning signal based on the calculated moving information of the vehicle and the calculated distance from the driving position of the vehicle to the joining point in response to sensing, by the sensing unit, the obstacle within the changed sensing range.
15. (Original) The obstacle detection method according to claim 9, wherein, the determiner anticipates that the vehicle will enter the joining point, and the changing of the sensing range includes longitudinally extending, by the controller, the sensing range of the sensing unit with respect to the target road to be joined.
13. The obstacle detection method according to claim 8, wherein, the determiner anticipates that the vehicle will enter the joining point, and the changing of the sensing range includes longitudinally extending, by the controller, the sensing range of the sensing unit with respect to the target road to be joined.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 8-11 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Lee US 20200225678, in view of Takii et al. US 20190248279, in view of Yu US 20210284165 and further in view of Wang et al. US 20200247412.
Regarding claim 1, Lee teaches An obstacle detection system of a vehicle, the obstacle detection system comprising: a driving information unit configured to calculate driving position information of the vehicle (Lee US 20200225678 abstract; paragraphs [0012]-[0015]; [0061]-[0062]; [0101]-[0108]; [0114]-[0119]; [0127]-[0138]; [0326]-[0332]; [0346]; [0403]-[0414]; [0493]-[0500]; figures 1-20)
The object detecting apparatus 300 is an apparatus for detecting an object located at outside of the vehicle 100 (par. 101). As one example, when the electrical part is a sensor, the vehicle driving information may be sensing information obtained by the sensor (Lee par. 327). The vehicle driving information includes vehicle information and surrounding information of the vehicle. The information related to an inside of the vehicle with respect to the frame of the vehicle 100 may be defined as vehicle information, and the information related to an outside of the vehicle may be defined as surrounding information (Lee par. 328). Vehicle information denotes information regarding the vehicle itself. For example, the vehicle information may include at least one of a driving speed of the vehicle, a driving direction, an acceleration, an angular speed, a position (GPS), a weight, a number of vehicle occupants, a braking force of the vehicle, a maximum braking force of the vehicle, an air pressure of each wheel, a centrifugal force applied to the vehicle, a driving mode of the vehicle (whether it is an autonomous driving mode or a manual driving mode), a parking mode of the vehicle (autonomous parking mode, automatic parking mode, manual parking mode), whether or not a user is on board the vehicle, information related to the user, and the like (Lee par. 329).
and a lane width of a target road to be joined; a sensing unit configured to sense obstacles located beside the vehicle; and a controller configured to change a sensing range of the sensing unit to position the sensing range on the target road and detect an obstacle moving on the target road to be joined, based on that the determiner anticipates that the vehicle will enter the joining point and to extend the sensing range based on the lane width of the target road to be joined,
An overall length refers to a length from a front end to a rear end of the vehicle 100, a width refers to a width of the vehicle 100, and a height refers to a length from a bottom of a wheel to a roof. In the following description, an overall-length direction L may refer to a direction which is a criterion for measuring the overall length of the vehicle 100, a width direction W may refer to a direction that is a criterion for measuring a width of the vehicle 100, and a height direction H may refer to a direction that is a criterion for measuring a height of the vehicle 100 (Lee par. 62). The object detecting apparatus 300 is an apparatus for detecting an object located at outside of the vehicle 100 (Lee par. 101). At least one of the shape or the size of the predetermined range may vary according to characteristics of a road located at the position of the vehicle 100. For example, the predetermined range may change to more cover a left side of the vehicle 100 so that sensors can sense a road-merged direction in a ramp section where a road located at the left side of the vehicle 100 is merged (Lee par. 500).
According to the cited passages and figures, the examiner interprets that the system can obtain characteristics of the road, such as its shape or size, and that the system extends the sensing coverage farther to the left side of the vehicle in response to merging in a ramp section.
and wherein the controller is further configured to extend a lateral width of the sensing range based on the lane width of the target road to be joined.
An overall length refers to a length from a front end to a rear end of the vehicle 100, a width refers to a width of the vehicle 100, and a height refers to a length from a bottom of a wheel to a roof. In the following description, an overall-length direction L may refer to a direction which is a criterion for measuring the overall length of the vehicle 100, a width direction W may refer to a direction that is a criterion for measuring a width of the vehicle 100, and a height direction H may refer to a direction that is a criterion for measuring a height of the vehicle 100 (Lee par. 62). At least one of the shape or the size of the predetermined range may vary according to characteristics of a road located at the position of the vehicle 100. For example, the predetermined range may change to more cover a left side of the vehicle 100 so that sensors can sense a road-merged direction in a ramp section where a road located at the left side of the vehicle 100 is merged (Lee par. 500). The processor 830 may classify, into the first group, a main road corresponding to the forward path information, and a sub road through which another vehicle can enter the main road according to a preset criterion, among those roads (Lee par. 504). The processor 830 may determine validity of the object based on the classified roads (S1530) (Lee par. 506).
According to the cited passages and figures, the examiner interprets the predetermined range changing to cover more of the left side of the vehicle as the extension of the lateral width of the sensing range, as mentioned in paragraph 500. Paragraph 500 also discloses that the shape or the size of the predetermined range may vary according to characteristics of a road located at the position of the vehicle, and paragraph 504 discloses the processor classifying, into the first group, a main road corresponding to the forward path information and a sub road through which another vehicle can enter the main road according to a preset criterion, among those roads. Therefore, one of ordinary skill in the art would consider the lane width of the target road to be joined as one of the characteristic and criterion values to be detected by the vehicle sensing unit.
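For illustration only, the lateral extension described above can be sketched as follows. The class, function names, and numeric values are hypothetical and are not taken from any cited reference; the sketch merely shows a sensing range being widened based on the lane width of the target road.

```python
from dataclasses import dataclass

@dataclass
class SensingRange:
    """Hypothetical rectangular sensing range, centered on the vehicle."""
    lateral_m: float       # width of the sensed region, in meters
    longitudinal_m: float  # length of the sensed region, in meters

def extend_lateral_width(current: SensingRange, lane_width_m: float,
                         margin_m: float = 0.5) -> SensingRange:
    """Widen the sensing range so it also covers the target lane plus a margin."""
    required = current.lateral_m + lane_width_m + margin_m
    return SensingRange(lateral_m=max(current.lateral_m, required),
                        longitudinal_m=current.longitudinal_m)

# Example: a 4 m-wide base range extended for a 3.5 m-wide target lane.
base = SensingRange(lateral_m=4.0, longitudinal_m=60.0)
extended = extend_lateral_width(base, lane_width_m=3.5)
```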
Lee does not explicitly teach a determiner configured to anticipate whether the vehicle will enter a joining point where the vehicle meets the target road to be joined based on the driving position information; wherein the driving information unit is further configured to calculate an angle of entry between a driving road of the vehicle and the target road to be joined based on information about the driving road of the vehicle and information about the target road to be joined, calculated by the driving information unit, wherein the controller is further configured to rotate the sensing range of the sensing unit based on the angle of entry between the driving road of the vehicle and the target road to be joined, calculated by the driving information unit so that the sensing range is to be located on the target road, wherein the determiner is further configured to calculate a distance from a driving position of the vehicle to the joining point based on the driving position information and the information about the driving road of the vehicle calculated by the driving information unit, and wherein the controller is further configured to extend the sensing range of the sensing unit based on the distance from the driving position of the vehicle to the joining point, calculated by the determiner.
Takii et al. teach a determiner configured to anticipate whether the vehicle will enter a joining point where the vehicle meets the target road to be joined based on the driving position information; (Takii et al. US 20190248279 paragraph [0043]-[0048]; figures 1-7;)
As shown in FIG. 3, in step S1, the illumination controller 47 determines whether the vehicle 1 has stopped before entering the main traffic lane R2 from the merging traffic lane R1. For example, when a signal, which indicates that the vehicle 1 has stopped before entering the main traffic lane R2 from the merging traffic lane R1, is received from the vehicle controller 3, the illumination controller 47 determines that the vehicle 1 has stopped before entering the main traffic lane R2. When a determination result in step S1 is YES, the processing proceeds to step S2. On the other hand, when the determination result in step S1 is NO, the processing is over. In the meantime, when the vehicle 1 is traveling in the advanced driving support mode or the fully autonomous driving mode, the vehicle controller 3 autonomously determines whether the vehicle 1 can enter the main traffic lane R2 from the merging traffic lane R1, based on detection data indicative of the surrounding environment of the vehicle 1 and acquired by the camera 6 and/or the radar 7. Thereafter, when it is determined that the vehicle 1 cannot enter the main traffic lane R2 due to the other vehicle existing on a future pathway of the vehicle 1, the vehicle controller 3 stops the vehicle 1 in the vicinity of a merging point of the merging traffic lane R1 and the main traffic lane R2 (Takii et al. par. 44). Then, in step S4, the illumination controller 47 determines whether the vehicle 1 has entered the main traffic lane R2 from the merging traffic lane R1. For example, when a signal, which indicates that the vehicle 1 has entered the main traffic lane R2 from the merging traffic lane R1, is received from the vehicle controller 3, the illumination controller 47 determines that the vehicle 1 has entered the main traffic lane R2 from the merging traffic lane R1. When a determination result in step S4 is YES, the processing proceeds to step S5. 
On the other hand, when the determination result in step S4 is NO, the processing of step S3 is again executed (Takii et al. par. 48).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lee and Takii et al. by incorporating the teaching of Takii et al. into the system of Lee. The motivation to combine these arts is to provide a controller that determines whether the vehicle can merge from one lane to another lane, from the Takii et al. reference into the Lee reference, so that the system guides the user in a safe maneuver when merging into another lane.
The combination of Lee and Takii et al. does not explicitly teach wherein the driving information unit is further configured to calculate an angle of entry between a driving road of the vehicle and the target road to be joined based on information about the driving road of the vehicle and information about the target road to be joined, calculated by the driving information unit, wherein the controller is further configured to rotate the sensing range of the sensing unit based on the angle of entry between the driving road of the vehicle and the target road to be joined, calculated by the driving information unit so that the sensing range is to be located on the target road, wherein the determiner is further configured to calculate a distance from a driving position of the vehicle to the joining point based on the driving position information and the information about the driving road of the vehicle calculated by the driving information unit, and wherein the controller is further configured to extend the sensing range of the sensing unit based on the distance from the driving position of the vehicle to the joining point, calculated by the determiner.
Yu teaches wherein the driving information unit is further configured to calculate an angle of entry between a driving road of the vehicle and the target road to be joined based on information about the driving road of the vehicle and information about the target road to be joined, calculated by the driving information unit, wherein the controller is further configured to rotate the sensing range of the sensing unit based on the angle of entry between the driving road of the vehicle and the target road to be joined, calculated by the driving information unit so that the sensing range is to be located on the target road. (Yu US 20210284165 abstract; paragraph [0018]; [0025]; [0034]-[0038]; [0051]; [0063]; [0069]-[0073]; [0076]; [0084]-[0086]; [0090]; [0095]; [0102]; figures 1-13;)
As shown in FIG. 3, when the automated driving control device 100 travels in the lane L3 and plans to change lanes to the lane L4, the automated driving control device 100 recognizes the end TS of the zebra zone and the end TE of the zebra zone, and recognizes the target area TA based on the recognition result. Then, the automated driving control device 100 generates a plan for entering the lane L4 from the lane L3 in the target area TA, and allows the vehicle M to enter the lane L4 from the lane L3 based on the generated plan (Yu par. 69). In the present embodiment, when the vehicle M has failed to recognize the end TE of the zebra zone S2 as described above, the determiner 142 determines the steering angle control mode for searching for the end TE (target) of the zebra zone S2 based on the recognition result of the second recognizer 136 (the second self-position and the orientation of the vehicle M). Then, the automated driving control device 100 controls the steering based on the determined steering angle control mode. The process of searching for the end TE of the zebra zone S2 will be described below (see FIG. 8 to FIG. 11) (Yu par. 71). The second recognizer 136 recognizes intersections of the assumed virtual line IL and the respective road division lines D1 to D4, and derives angles formed by the virtual line IL and the respective road division lines D1 to D4 (angles formed by the virtual line IL and predetermined road division lines among the road division lines D1 to D4) at the intersections. The second recognizer 136 recognizes an angle θ1 formed by the aforementioned process. Then, the second recognizer 136 recognizes that the reference direction of the vehicle M is rotated by θ2 with respect to the road division line (Yu par. 86).
As shown in figures 9-10, the vehicle M has rotated by the angle θ2 to enter lane L4, as shown in figures 3 and 5-11, and the examiner interprets the object recognition device 16 as the sensing device that rotates by the same angle as the vehicle M turns toward lane L4.
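For illustration only, rotating the sensing range by the entry angle, as interpreted above, can be sketched as follows. The function names and angle values are hypothetical and are not taken from any cited reference; the sketch only shows a sensor boresight being rotated so the sensed footprint lands on the target road.

```python
import math

def rotate_sensing_axis(heading_deg: float, entry_angle_deg: float) -> float:
    """Rotate the sensor's boresight by the entry angle between the driving
    road and the target road, normalized to [0, 360) degrees."""
    return (heading_deg + entry_angle_deg) % 360.0

def sensing_footprint_center(x: float, y: float, axis_deg: float,
                             range_m: float) -> tuple:
    """Center of the sensing footprint, range_m ahead along the rotated axis."""
    rad = math.radians(axis_deg)
    return (x + range_m * math.cos(rad), y + range_m * math.sin(rad))

# Example: heading 90 deg, 30 deg entry angle toward the target road.
axis = rotate_sensing_axis(90.0, 30.0)
```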
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lee and Takii et al. with Yu by incorporating the teaching of Yu into the system of Lee and Takii et al. The motivation to combine these arts is to determine the steering angle control mode for searching for the end TE (target) of the zebra zone of the new lane, for the vehicle to merge from the current lane into the new lane, from the Yu reference into the Lee and Takii et al. references, so that the vehicle can detect the surrounding environment of the new lane and maneuver safely from the current lane into the new lane.
The combination of Lee, Takii et al. and Yu does not explicitly teach wherein the determiner is further configured to calculate a distance from a driving position of the vehicle to the joining point based on the driving position information and the information about the driving road of the vehicle calculated by the driving information unit, and wherein the controller is further configured to extend the sensing range of the sensing unit based on the distance from the driving position of the vehicle to the joining point, calculated by the determiner.
Wang et al. teach wherein the determiner is further configured to calculate a distance from a driving position of the vehicle to the joining point based on the driving position information and the information about the driving road of the vehicle calculated by the driving information unit, (Wang et al. US 20200247412 abstract; paragraph [0064]-[0071]; [0079]-[0083]; [0094]-[0100] figures 1-2, 5 and 9-11;)
In some embodiments, to determine the estimated merging timestamp of a vehicle platform 103 traveling in the first lane or the second lane, the reference vehicle processor 202 may analyze the vehicle movement data of the vehicle platform 103 to extract the vehicle position and the vehicle speed (including the speed's rate of change in some cases) of the vehicle platform 103. The reference vehicle processor 202 may determine the distance d between the vehicle position of the vehicle platform 103 and the merging point (e.g., 240 m), and compute the travel time Δ.sub.t for the vehicle platform 103 to travel the distance d and reach the merging point using the longitudinal speed υ.sub.longitudial of the vehicle platform 103 (e.g., 30 m/s) (Wang et al. par. 64).
and wherein the controller is further configured to extend the sensing range of the sensing unit based on the distance from the driving position of the vehicle to the joining point, calculated by the determiner.
Referring back to FIG. 5, in block 506, the merging plan processor 204 may optionally determine a position range for positioning the merging vehicle in the second lane based on the simulated position of the reference vehicle in the second lane indicated by the virtual target. In some embodiments, the position range may include a minimum (min) region, a safe region, and a max region (Wang et al. par. 94). In block 508, the virtual assistance information renderer 208 may optionally overlay a virtual position indicator indicating the position range for the merging vehicle in the field of view of the driver of the merging vehicle. For example, as depicted in FIG. 11A, the virtual assistance information renderer 208 may render the virtual position indicator 1140 in the field of view 1100 of the driver of the merging vehicle. As shown, the virtual position indicator 1140 may be rendered relative to the virtual target 1104 on the front display surface 1120 and may indicate the min region, the safe region, the max region of the position range that are located behind the simulated position of the reference vehicle indicated by the virtual target 1104. In some embodiments, the virtual assistance information renderer 208 may also render a merging instruction 1142 in the field of view 1100 instructing the driver of the merging vehicle to follow the virtual target 1104 to smoothly perform the merging process. To follow the virtual target 1104, the driver of the merging vehicle may position the merging vehicle in the lane 1122 according to the regions indicated by the virtual position indicator 1140, thereby maintaining an appropriate following distance to the simulated position of the reference vehicle 1102 indicated by the virtual target 1104. 
As the merging vehicle maintains an appropriate following distance to the simulated position of the reference vehicle, the merging vehicle can smoothly merge with the reference vehicle as the merging vehicle reaches the merging point (Wang et al. par. 95).
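For illustration only, the distance and travel-time computation of Wang et al. paragraph 64, together with the claimed range extension, can be sketched as follows. The helper names and the extension rule are illustrative assumptions and are not taken from the references; only the 240 m / 30 m/s arithmetic comes from Wang's example.

```python
import math

def distance_to_merge(vehicle_xy, merge_xy):
    """Euclidean distance d from the vehicle position to the merging point (m)."""
    return math.hypot(merge_xy[0] - vehicle_xy[0], merge_xy[1] - vehicle_xy[1])

def travel_time(distance_m, longitudinal_speed_mps):
    """Travel time to reach the merging point, as in Wang's example
    (240 m at 30 m/s gives 8 s)."""
    return distance_m / longitudinal_speed_mps

def extended_range(base_range_m, distance_m, cap_m=200.0):
    """Hypothetical rule: extend the sensing range to at least the distance
    to the joining point, limited by a sensor cap."""
    return min(max(base_range_m, distance_m), cap_m)
```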
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lee, Takii et al. and Yu with Wang et al. by incorporating the teaching of Wang et al. into the system of Lee, Takii et al. and Yu. The motivation to combine these arts is to provide a distance between the vehicle position and the merging point, from the Wang et al. reference into the Lee, Takii et al. and Yu references, so that the system can help the user merge into the safe region.
Regarding claim 2, the combination of Lee, Takii et al., Yu and Wang et al. discloses the obstacle detection system according to claim 1, wherein: the lane width of the target road to be joined is calculated based on information about the target road to be joined, calculated by the driving information unit.
An overall length refers to a length from a front end to a rear end of the vehicle 100, a width refers to a width of the vehicle 100, and a height refers to a length from a bottom of a wheel to a roof. In the following description, an overall-length direction L may refer to a direction which is a criterion for measuring the overall length of the vehicle 100, a width direction W may refer to a direction that is a criterion for measuring a width of the vehicle 100, and a height direction H may refer to a direction that is a criterion for measuring a height of the vehicle 100 (Lee par. 62). At least one of the shape or the size of the predetermined range may vary according to characteristics of a road located at the position of the vehicle 100. For example, the predetermined range may change to more cover a left side of the vehicle 100 so that sensors can sense a road-merged direction in a ramp section where a road located at the left side of the vehicle 100 is merged (Lee par. 500). The processor 830 may classify, into the first group, a main road corresponding to the forward path information, and a sub road through which another vehicle can enter the main road according to a preset criterion, among those roads (Lee par. 504). The processor 830 may determine validity of the object based on the classified roads (S1530) (Lee par. 506).
Regarding claim 3, the combination of Lee, Takii et al., Yu and Wang et al. discloses the obstacle detection system according to claim 1, wherein: the driving information unit is further configured to calculate the information about the driving road of the vehicle and information about the target road to be joined; and the controller is configured to change the sensing range of the sensing unit based on the information about the driving road of the vehicle and the information about the target road to be joined, calculated by the driving information unit, when the determiner anticipates that the vehicle will enter the joining point.
The device where determining the validity of the object includes: classifying a plurality of roads, which are located within a predetermined range from the vehicle, into a first group and a second group, based on the forward path information; and determining the object to be valid based on the object being located on a road of the first group; and determining the object to be invalid based on the object being located on a road of the second group. The device where the operations further include: classifying, into the first group, (i) a main road, among the plurality of roads, that corresponds to the forward path information, and (ii) a sub road, among the plurality of roads, through which another vehicle is allowed to enter the main road; and classifying, into the second group, remaining roads, among the plurality of roads, except for the main road and the sub road. The device where the operations further include: calculating, based on a speed of the object, at least one of a location or a time at which the object is allowed to enter the main road; and selectively determining the sub road based on comparing (i) the calculated at least one of the location or the time with (ii) a preset criterion. The device where the sub road differs depending on at least one of a speed of the vehicle or a speed of the object. The device where the operations further include: adjusting a detectable range of the at least one sensor such that the at least one sensor senses the road included in the first group and does not sense the road included in the second group (Lee par. 13). At least one of the shape or the size of the predetermined range may vary according to characteristics of a road located at the position of the vehicle 100. For example, the predetermined range may change to more cover a left side of the vehicle 100 so that sensors can sense a road-merged direction in a ramp section where a road located at the left side of the vehicle 100 is merged (Lee par. 500). 
The processor 830 may classify, into the first group, a main road corresponding to the forward path information, and a sub road through which another vehicle can enter the main road according to a preset criterion, among those roads (Lee par. 504). The processor 830 may determine validity of the object based on the classified roads (S1530) (Lee par. 506).
Regarding claim 8, the combination of Lee, Takii et al., Yu and Wang et al. discloses the obstacle detection system according to claim 1, wherein: the controller is further configured to longitudinally expand the sensing range of the sensing unit with respect to the target road to be joined, when the determiner anticipates that the vehicle will enter the joining point.
As shown in FIG. 3, when the automated driving control device 100 travels in the lane L3 and plans to change lanes to the lane L4, the automated driving control device 100 recognizes the end TS of the zebra zone and the end TE of the zebra zone, and recognizes the target area TA based on the recognition result. Then, the automated driving control device 100 generates a plan for entering the lane L4 from the lane L3 in the target area TA, and allows the vehicle M to enter the lane L4 from the lane L3 based on the generated plan (Yu par. 69). In the present embodiment, when the vehicle M has failed to recognize the end TE of the zebra zone S2 as described above, the determiner 142 determines the steering angle control mode for searching for the end TE (target) of the zebra zone S2 based on the recognition result of the second recognizer 136 (the second self-position and the orientation of the vehicle M). Then, the automated driving control device 100 controls the steering based on the determined steering angle control mode. The process of searching for the end TE of the zebra zone S2 will be described below (see FIG. 8 to FIG. 11) (Yu par. 71). The second recognizer 136 recognizes intersections of the assumed virtual line IL and the respective road division lines D1 to D4, and derives angles formed by the virtual line IL and the respective road division lines D1 to D4 (angles formed by the virtual line IL and predetermined road division lines among the road division lines D1 to D4) at the intersections. The second recognizer 136 recognizes an angle θ1 formed by the aforementioned process. Then, the second recognizer 136 recognizes that the reference direction of the vehicle M is rotated by θ2 with respect to the road division line (Yu par. 86).
As shown in figures 9-10, the vehicle M has rotated by the angle θ2 to enter lane L4, as shown in figures 3 and 5-11, and the examiner interprets the object recognition device 16 as the sensing device that rotates by the same angle as the vehicle M turns toward lane L4. It is evident that the sensing range of the vehicle changes relative to the change in the vehicle M's angle, as shown in figures 3 and 5-11. Once the vehicle M changes its angle, its latitude and longitude position evidently changes as well. Therefore, the latitude and longitude detection range of the vehicle M will change corresponding to the changing direction and position of the vehicle M.
Regarding claim 9, Lee teaches An obstacle detection method of a vehicle, the obstacle detection method comprising: calculating, by a driving information unit, driving position information of the vehicle (Lee US 20200225678 abstract; paragraphs [0012]-[0015]; [0061]-[0062]; [0101]-[0108]; [0114]-[0119]; [0127]-[0138]; [0326]-[0332]; [0346]; [0403]-[0414]; [0493]-[0506]; [0508]-[0510]; [0512]-[0516]; [0527]-[0530]; figures 1-20;)
The object detecting apparatus 300 is an apparatus for detecting an object located at outside of the vehicle 100 (Lee par. 101). As one example, when the electrical part is a sensor, the vehicle driving information may be sensing information obtained by the sensor (Lee par. 327). The vehicle driving information includes vehicle information and surrounding information of the vehicle. The information related to an inside of the vehicle with respect to the frame of the vehicle 100 may be defined as vehicle information, and the information related to an outside of the vehicle may be defined as surrounding information (Lee par. 328). Vehicle information denotes information regarding the vehicle itself. For example, the vehicle information may include at least one of a driving speed of the vehicle, a driving direction, an acceleration, an angular speed, a position (GPS), a weight, a number of vehicle occupants, a braking force of the vehicle, a maximum braking force of the vehicle, an air pressure of each wheel, a centrifugal force applied to the vehicle, a driving mode of the vehicle (whether it is an autonomous driving mode or a manual driving mode), a parking mode of the vehicle (autonomous parking mode, automatic parking mode, manual parking mode), whether or not a user is on board the vehicle, information related to the user, and the like (Lee par. 329).
and a lane width of a target road to be joined; changing, by a controller, a sensing range of a sensing unit to position the sensing range on the target road and detect an obstacle moving on the target road to be joined in response to the determiner anticipating that the vehicle will enter the joining point; wherein, the changing of the sensing range includes extending, by the controller, the sensing range based on the lane width of the target road to be joined
An overall length refers to a length from a front end to a rear end of the vehicle 100, a width refers to a width of the vehicle 100, and a height refers to a length from a bottom of a wheel to a roof. In the following description, an overall-length direction L may refer to a direction which is a criterion for measuring the overall length of the vehicle 100, a width direction W may refer to a direction that is a criterion for measuring a width of the vehicle 100, and a height direction H may refer to a direction that is a criterion for measuring a height of the vehicle 100 (Lee par. 62). The object detecting apparatus 300 is an apparatus for detecting an object located at outside of the vehicle 100 (Lee par. 101). At least one of the shape or the size of the predetermined range may vary according to characteristics of a road located at the position of the vehicle 100. For example, the predetermined range may change to more cover a left side of the vehicle 100 so that sensors can sense a road-merged direction in a ramp section where a road located at the left side of the vehicle 100 is merged (Lee par. 500).
According to the cited passages and figures, the examiner interprets that the system can obtain the characteristics of the road, such as its shape or size, and that the system extends more sensing coverage to the left side of the vehicle so that the sensors can sense the road-merged direction in a ramp section.
and wherein the changing of the sensing range includes extending, by the controller, a lateral width of the sensing range based on the lane width of the target road to be joined.
An overall length refers to a length from a front end to a rear end of the vehicle 100, a width refers to a width of the vehicle 100, and a height refers to a length from a bottom of a wheel to a roof. In the following description, an overall-length direction L may refer to a direction which is a criterion for measuring the overall length of the vehicle 100, a width direction W may refer to a direction that is a criterion for measuring a width of the vehicle 100, and a height direction H may refer to a direction that is a criterion for measuring a height of the vehicle 100 (Lee par. 62). At least one of the shape or the size of the predetermined range may vary according to characteristics of a road located at the position of the vehicle 100. For example, the predetermined range may change to more cover a left side of the vehicle 100 so that sensors can sense a road-merged direction in a ramp section where a road located at the left side of the vehicle 100 is merged (Lee par. 500). The processor 830 may classify, into the first group, a main road corresponding to the forward path information, and a sub road through which another vehicle can enter the main road according to a preset criterion, among those roads (Lee par. 504). The processor 830 may determine validity of the object based on the classified roads (S1530) (Lee par. 506).
According to the cited passages and figures, the examiner interprets the predetermined range changing to cover more of the left side of the vehicle as the extension of the lateral width of the sensing range, as mentioned in paragraph 500. Paragraph 500 also discloses that the shape or the size of the predetermined range may vary according to characteristics of a road located at the position of the vehicle, and paragraph 504 discloses the processor classifying, into the first group, a main road corresponding to the forward path information and a sub road through which another vehicle can enter the main road according to a preset criterion, among those roads. Therefore, one of ordinary skill in the art would consider the lane width of the target road to be joined as one of the characteristic and criterion values to be detected by the vehicle sensing unit.
Lee does not explicitly teach anticipating, by a determiner, whether the vehicle will enter a joining point where the vehicle meets the target road to be joined based on the driving position information; and calculating, by the driving information unit, an angle of entry between a driving road of the vehicle and the target road to be joined based on the calculated information about the driving road of the vehicle and the calculated information about the target road to be joined, wherein, the changing of the sensing range includes rotating, by the controller, the sensing range of the sensing unit based on the calculated angle of entry between the driving road of the vehicle and the target road to be joined so that the sensing range is to be located on the target road, and wherein the method further includes: calculating, by the determiner, a distance from a driving position of the vehicle to the joining point based on the calculated information about the driving road of the vehicle and the calculated information about the target road to be joined, wherein the changing of the sensing range further includes extending, by the controller, the sensing range based on the calculated distance from the driving position of the vehicle to the joining point.
Takii et al. teach anticipating, by a determiner, whether the vehicle will enter a joining point where the vehicle meets [[a]] the target road to be joined based on the driving position information; (Takii et al. US 20190248279 paragraph [0043]-[0048]; figures 1-7; )
As shown in FIG. 3, in step S1, the illumination controller 47 determines whether the vehicle 1 has stopped before entering the main traffic lane R2 from the merging traffic lane R1. For example, when a signal, which indicates that the vehicle 1 has stopped before entering the main traffic lane R2 from the merging traffic lane R1, is received from the vehicle controller 3, the illumination controller 47 determines that the vehicle 1 has stopped before entering the main traffic lane R2. When a determination result in step S1 is YES, the processing proceeds to step S2. On the other hand, when the determination result in step S1 is NO, the processing is over. In the meantime, when the vehicle 1 is traveling in the advanced driving support mode or the fully autonomous driving mode, the vehicle controller 3 autonomously determines whether the vehicle 1 can enter the main traffic lane R2 from the merging traffic lane R1, based on detection data indicative of the surrounding environment of the vehicle 1 and acquired by the camera 6 and/or the radar 7. Thereafter, when it is determined that the vehicle 1 cannot enter the main traffic lane R2 due to the other vehicle existing on a future pathway of the vehicle 1, the vehicle controller 3 stops the vehicle 1 in the vicinity of a merging point of the merging traffic lane R1 and the main traffic lane R2 (Takii et al. par. 44). Then, in step S4, the illumination controller 47 determines whether the vehicle 1 has entered the main traffic lane R2 from the merging traffic lane R1. For example, when a signal, which indicates that the vehicle 1 has entered the main traffic lane R2 from the merging traffic lane R1, is received from the vehicle controller 3, the illumination controller 47 determines that the vehicle 1 has entered the main traffic lane R2 from the merging traffic lane R1. When a determination result in step S4 is YES, the processing proceeds to step S5. 
On the other hand, when the determination result in step S4 is NO, the processing of step S3 is again executed (Takii et al. par. 48).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lee and Takii et al. by incorporating the teaching of Takii et al. into the method of Lee. The motivation to combine these arts is to provide a controller that determines whether the vehicle can merge from one lane to another lane, from the Takii et al. reference into the Lee reference, so that the system guides the user in a safe maneuver when merging into another lane.
The combination of Lee and Takii et al. does not explicitly teach and calculating, by the driving information unit, an angle of entry between a driving road of the vehicle and the target road to be joined based on the calculated information about the driving road of the vehicle and the calculated information about the target road to be joined, wherein, the changing of the sensing range includes rotating, by the controller, the sensing range of the sensing unit based on the calculated angle of entry between the driving road of the vehicle and the target road to be joined so that the sensing range is to be located on the target road, and wherein the method further includes: calculating, by the determiner, a distance from a driving position of the vehicle to the joining point based on the calculated information about the driving road of the vehicle and the calculated information about the target road to be joined, wherein the changing of the sensing range further includes extending, by the controller, the sensing range based on the calculated distance from the driving position of the vehicle to the joining point.
Yu teaches and calculating, by the driving information unit, an angle of entry between a driving road of the vehicle and the target road to be joined based on the calculated information about the driving road of the vehicle and the calculated information about the target road to be joined, wherein, the changing of the sensing range includes rotating, by the controller, the sensing range of the sensing unit based on the calculated angle of entry between the driving road of the vehicle and the target road to be joined so that the sensing range is to be located on the target road. (Yu US 20210284165 abstract; paragraph [0018]; [0025]; [0034]-[0038]; [0051]; [0063]; [0069]-[0073]; [0076]; [0084]-[0086]; [0090]; [0095]; [0102]; figures 1-13;)
As shown in FIG. 3, when the automated driving control device 100 travels in the lane L3 and plans to change lanes to the lane L4, the automated driving control device 100 recognizes the end TS of the zebra zone and the end TE of the zebra zone, and recognizes the target area TA based on the recognition result. Then, the automated driving control device 100 generates a plan for entering the lane L4 from the lane L3 in the target area TA, and allows the vehicle M to enter the lane L4 from the lane L3 based on the generated plan (Yu par. 69). In the present embodiment, when the vehicle M has failed to recognize the end TE of the zebra zone S2 as described above, the determiner 142 determines the steering angle control mode for searching for the end TE (target) of the zebra zone S2 based on the recognition result of the second recognizer 136 (the second self-position and the orientation of the vehicle M). Then, the automated driving control device 100 controls the steering based on the determined steering angle control mode. The process of searching for the end TE of the zebra zone S2 will be described below (see FIG. 8 to FIG. 11) (Yu par. 71). The second recognizer 136 recognizes intersections of the assumed virtual line IL and the respective road division lines D1 to D4, and derives angles formed by the virtual line IL and the respective road division lines D1 to D4 (angles formed by the virtual line IL and predetermined road division lines among the road division lines D1 to D4) at the intersections. The second recognizer 136 recognizes an angle 01 formed by the aforementioned process. Then, the second recognizer 136 recognizes that the reference direction of the vehicle M is rotated by θ2 with respect to the road division line (Yu par. 86).
As shown in figures 9-10, the vehicle M has rotated by the angle θ2 to enter lane L4, as shown in figures 3 and 5-11, and the examiner interprets the object recognition device 16 as the sensing device that rotates by the same angle as vehicle M turns toward lane L4.
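The geometric operation underlying this interpretation, rotating a sensing direction by the vehicle's entry angle, can be sketched in Python. This is a generic 2-D rotation sketch, not code from any cited reference; the function name and vector representation are assumptions:

```python
import math

def rotate_sensing_direction(direction_xy, theta_rad):
    """Rotate a 2-D sensing-direction vector counter-clockwise by theta,
    modeling a sensor whose field of view turns with the vehicle."""
    x, y = direction_xy
    cos_t, sin_t = math.cos(theta_rad), math.sin(theta_rad)
    return (x * cos_t - y * sin_t, x * sin_t + y * cos_t)

# A forward-facing sensor (pointing along +x) rotated 90 degrees faces +y.
new_dir = rotate_sensing_direction((1.0, 0.0), math.pi / 2)
```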
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lee and Takii et al. with Yu by incorporating the teaching of Yu into the method of Lee and Takii et al. The motivation to combine these references is to determine, from the Yu reference, the steering angle control mode for searching for the end TE (target) of the zebra zone of the new lane into which the vehicle merges, so that in the Lee and Takii et al. combination the vehicle can detect the surrounding environment of the new lane and maneuver safely from the current lane into the new lane.
The combination of Lee, Takii et al. and Yu does not explicitly teach and wherein the method further includes: calculating, by the determiner, a distance from a driving position of the vehicle to the joining point based on the calculated information about the driving road of the vehicle and the calculated information about the target road to be joined, wherein the changing of the sensing range further includes extending, by the controller, the sensing range based on the calculated distance from the driving position of the vehicle to the joining point.
Wang et al. teach and wherein the method further includes: calculating, by the determiner, a distance from a driving position of the vehicle to the joining point based on the calculated information about the driving road of the vehicle and the calculated information about the target road to be joined, (Wang et al. US 20200247412 abstract; paragraph [0064]-[0071]; [0079]-[0083]; [0094]-[0100] figures 1-2, 5 and 9-11;)
In some embodiments, to determine the estimated merging timestamp of a vehicle platform 103 traveling in the first lane or the second lane, the reference vehicle processor 202 may analyze the vehicle movement data of the vehicle platform 103 to extract the vehicle position and the vehicle speed (including the speed's rate of change in some cases) of the vehicle platform 103. The reference vehicle processor 202 may determine the distance d between the vehicle position of the vehicle platform 103 and the merging point (e.g., 240 m), and compute the travel time Δ.sub.t for the vehicle platform 103 to travel the distance d and reach the merging point using the longitudinal speed υ.sub.longitudial of the vehicle platform 103 (e.g., 30 m/s) (Wang et al. par. 64).
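The arithmetic in the cited paragraph can be expressed directly in Python. The values 240 m and 30 m/s come from the cited passage (which yields a travel time of 8 s); the function name is illustrative, not from the reference:

```python
def travel_time_to_merge(distance_m, longitudinal_speed_mps):
    """Travel time for a vehicle to cover the distance d to the merging
    point at its longitudinal speed, per the computation in Wang par. 64."""
    if longitudinal_speed_mps <= 0:
        raise ValueError("longitudinal speed must be positive")
    return distance_m / longitudinal_speed_mps

# Example values from the cited paragraph: d = 240 m, v = 30 m/s
delta_t = travel_time_to_merge(240.0, 30.0)  # 8.0 seconds
```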
wherein the changing of the sensing range further includes extending, by the controller, the sensing range based on the calculated distance from the driving position of the vehicle to the joining point.
Referring back to FIG. 5, in block 506, the merging plan processor 204 may optionally determine a position range for positioning the merging vehicle in the second lane based on the simulated position of the reference vehicle in the second lane indicated by the virtual target. In some embodiments, the position range may include a minimum (min) region, a safe region, and a max region (Wang et al. par. 94). In block 508, the virtual assistance information renderer 208 may optionally overlay a virtual position indicator indicating the position range for the merging vehicle in the field of view of the driver of the merging vehicle. For example, as depicted in FIG. 11A, the virtual assistance information renderer 208 may render the virtual position indicator 1140 in the field of view 1100 of the driver of the merging vehicle. As shown, the virtual position indicator 1140 may be rendered relative to the virtual target 1104 on the front display surface 1120 and may indicate the min region, the safe region, the max region of the position range that are located behind the simulated position of the reference vehicle indicated by the virtual target 1104. In some embodiments, the virtual assistance information renderer 208 may also render a merging instruction 1142 in the field of view 1100 instructing the driver of the merging vehicle to follow the virtual target 1104 to smoothly perform the merging process. To follow the virtual target 1104, the driver of the merging vehicle may position the merging vehicle in the lane 1122 according to the regions indicated by the virtual position indicator 1140, thereby maintaining an appropriate following distance to the simulated position of the reference vehicle 1102 indicated by the virtual target 1104. 
As the merging vehicle maintains an appropriate following distance to the simulated position of the reference vehicle, the merging vehicle can smoothly merge with the reference vehicle as the merging vehicle reaches the merging point (Wang et al. par. 95).
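The claimed "extending the sensing range based on the calculated distance to the joining point" can be sketched as a simple policy in Python. The max() policy below is an assumption for illustration only; it is not taken from Wang et al. or any other cited reference:

```python
def extend_sensing_range(base_range_m, distance_to_merge_m):
    """Illustrative sketch: grow the sensor's detection range so that it
    at least covers the remaining distance to the merging point, while
    never shrinking below the base range (assumed policy)."""
    return max(base_range_m, distance_to_merge_m)

# With a 100 m base range and 240 m to the merge, the range extends to 240 m.
extended = extend_sensing_range(100.0, 240.0)
```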
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lee, Takii et al. and Yu with Wang et al. by incorporating the teaching of Wang et al. into the method of Lee, Takii et al. and Yu. The motivation to combine these references is to provide, from the Wang et al. reference, the distance between the vehicle position and the merging point, so that in the Lee, Takii et al. and Yu combination the system can help the user merge into the safe region.
Regarding claim 10, the combination of Lee, Takii et al., Yu and Wang et al. discloses the obstacle detection method wherein the lane width of the target road to be joined is calculated based on the calculated information about the target road to be joined and the calculated driving position information.
An overall length refers to a length from a front end to a rear end of the vehicle 100, a width refers to a width of the vehicle 100, and a height refers to a length from a bottom of a wheel to a roof. In the following description, an overall-length direction L may refer to a direction which is a criterion for measuring the overall length of the vehicle 100, a width direction W may refer to a direction that is a criterion for measuring a width of the vehicle 100, and a height direction H may refer to a direction that is a criterion for measuring a height of the vehicle 100 (Lee par. 62). At least one of the shape or the size of the predetermined range may vary according to characteristics of a road located at the position of the vehicle 100. For example, the predetermined range may change to more cover a left side of the vehicle 100 so that sensors can sense a road-merged direction in a ramp section where a road located at the left side of the vehicle 100 is merged (Lee par. 500). The processor 830 may classify, into the first group, a main road corresponding to the forward path information, and a sub road through which another vehicle can enter the main road according to a preset criterion, among those roads (Lee par. 504). The processor 830 may determine validity of the object based on the classified roads (S1530) (Lee par. 506).
According to the cited passages and figures, the examiner interprets that the system can obtain characteristics of the road, such as its shape or size, and extends the sensing coverage further to the left side of the vehicle in response to merging in the direction of a ramp.
Regarding claim 11, the combination of Lee, Takii et al., Yu and Wang et al. disclose The obstacle detection method according to claim 9, further comprising calculating, by the driving information unit, information about the driving road of the vehicle and information about the target road to be joined, wherein, the determiner anticipates that the vehicle will enter the joining point, and in the changing of the sensing range, the sensing range is changed based on the calculated information about the driving road of the vehicle and the calculated information about the target road to be joined.
The device where determining the validity of the object includes: classifying a plurality of roads, which are located within a predetermined range from the vehicle, into a first group and a second group, based on the forward path information; and determining the object to be valid based on the object being located on a road of the first group; and determining the object to be invalid based on the object being located on a road of the second group. The device where the operations further include: classifying, into the first group, (i) a main road, among the plurality of roads, that corresponds to the forward path information, and (ii) a sub road, among the plurality of roads, through which another vehicle is allowed to enter the main road; and classifying, into the second group, remaining roads, among the plurality of roads, except for the main road and the sub road. The device where the operations further include: calculating, based on a speed of the object, at least one of a location or a time at which the object is allowed to enter the main road; and selectively determining the sub road based on comparing (i) the calculated at least one of the location or the time with (ii) a preset criterion. The device where the sub road differs depending on at least one of a speed of the vehicle or a speed of the object. The device where the operations further include: adjusting a detectable range of the at least one sensor such that the at least one sensor senses the road included in the first group and does not sense the road included in the second group (Lee par. 13). At least one of the shape or the size of the predetermined range may vary according to characteristics of a road located at the position of the vehicle 100. For example, the predetermined range may change to more cover a left side of the vehicle 100 so that sensors can sense a road-merged direction in a ramp section where a road located at the left side of the vehicle 100 is merged (Lee par. 500). 
The processor 830 may classify, into the first group, a main road corresponding to the forward path information, and a sub road through which another vehicle can enter the main road according to a preset criterion, among those roads (Lee par. 504). The processor 830 may determine validity of the object based on the classified roads (S1530) (Lee par. 506).
Regarding claim 15, the combination of Lee, Takii et al., Yu and Wang et al. disclose The obstacle detection method according to claim 9, wherein, the determiner anticipates that the vehicle will enter the joining point, and the changing of the sensing range includes longitudinally extending, by the controller, the sensing range of the sensing unit with respect to the target road to be joined.
As shown in FIG. 3, when the automated driving control device 100 travels in the lane L3 and plans to change lanes to the lane L4, the automated driving control device 100 recognizes the end TS of the zebra zone and the end TE of the zebra zone, and recognizes the target area TA based on the recognition result. Then, the automated driving control device 100 generates a plan for entering the lane L4 from the lane L3 in the target area TA, and allows the vehicle M to enter the lane L4 from the lane L3 based on the generated plan (Yu par. 69). In the present embodiment, when the vehicle M has failed to recognize the end TE of the zebra zone S2 as described above, the determiner 142 determines the steering angle control mode for searching for the end TE (target) of the zebra zone S2 based on the recognition result of the second recognizer 136 (the second self-position and the orientation of the vehicle M). Then, the automated driving control device 100 controls the steering based on the determined steering angle control mode. The process of searching for the end TE of the zebra zone S2 will be described below (see FIG. 8 to FIG. 11) (Yu par. 71). The second recognizer 136 recognizes intersections of the assumed virtual line IL and the respective road division lines D1 to D4, and derives angles formed by the virtual line IL and the respective road division lines D1 to D4 (angles formed by the virtual line IL and predetermined road division lines among the road division lines D1 to D4) at the intersections. The second recognizer 136 recognizes an angle 01 formed by the aforementioned process. Then, the second recognizer 136 recognizes that the reference direction of the vehicle M is rotated by θ2 with respect to the road division line (Yu par. 86).
As shown in figures 9-10, the vehicle M has rotated by the angle θ2 to enter lane L4, as shown in figures 3 and 5-11, and the examiner interprets the object recognition device 16 as the sensing device that rotates by the same angle as vehicle M turns toward lane L4. The sensing range of the vehicle obviously changes relative to the change in the vehicle M's angle, as shown in figures 3 and 5-11. Once the vehicle M changes its angle, its lateral and longitudinal position obviously changes as well. Therefore, the lateral and longitudinal detection range of the vehicle M will change corresponding to the changing direction and position of the vehicle M.
Claims 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Lee US 20200225678, in view of Takii et al. US 20190248279, in view of Yu US 20210284165, in view of Wang et al. US 20200247412 and further in view of Weksler et al. US 20200353863.
Regarding claim 6, the combination of Lee, Takii et al., Yu and Wang et al. teaches all the limitations of claim 1.
The combination of Lee, Takii et al., Yu and Wang et al. does not explicitly teach The obstacle detection system according to claim 1, wherein: the driving information unit is further configured to calculate moving information of the vehicle; and the controller is further configured to generate a warning signal based on the moving information of the vehicle and the distance from the driving position of the vehicle to the joining point when the sensing unit detects the obstacle within the changed sensing range.
Weksler et al. teach The obstacle detection system according to claim 1, wherein: the driving information unit is further configured to calculate moving information of the vehicle; and the controller is further configured to generate a warning signal based on the moving information of the vehicle and the distance from the driving position of the vehicle to the joining point when the sensing unit detects the obstacle within the changed sensing range. (Weksler et al. US 20200353863 abstract; paragraph [0043]-[0058]; [0060]-[0062]; [0066]; figures 1-5;)
FIG. 2 illustrates a process 200 of generating a driver notification of a PTI (potential traffic impact) condition determined related to a driving maneuver. PTI conditions may include the collision with a secondary vehicle, collisions with road signs, barriers, or stop lights, engaging a tire with curb or off-road, or the like. Driving maneuvers may include U-turns, left turns, right turns, merging into traffic from an on-ramp, or the like (Weksler et al. par. 44). In yet another example, when a determination is made that merging into traffic is the driving maneuver, the one or more processors again, may use optical systems, radar, or LIDAR to determine a PTI condition related to the maneuver, including but not limited to speed, distance, position, and acceleration of objects such as secondary vehicles on a roadway or on ramp in proximity to the principle vehicle. Consequently, the input device may provide inputs related to behind, and to the side, of the principle vehicle in order to make determination related to the best manner to merge into traffic. Based on the DIA data and/or TMR data, the PTI condition may be determined (Weksler et al. par. 52). If at 210, a determination is made that the probability for impact is above a threshold, or in at a high risk level, at 212, the one or more processors generate a driver notification of the PTI condition, and specifically a notification not to make the driving maneuver. In one example, when a determination is made based on the DIA data and/or TMR data that a greater than 50% chance of a collision may occur, a display within the principle vehicle may display the word “STOP” or “NO TURN”. In another example, these words are displayed with a red background, or a flashing background. Alternatively or additionally, a voice command is provided that states “NO TURN”, or “NO MERGE”. 
Tactile or haptic feedback, including vibration of the steering wheel or seat of the driver may also be utilized as a notification (Weksler et al. par. 54).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lee, Takii et al., Yu and Wang et al. with Weksler et al. by incorporating the teaching of Weksler et al. into the system of Lee, Takii et al., Yu and Wang et al. The motivation to combine these references is to provide, from the Weksler et al. reference, a notification of a potential traffic impact, so that in the Lee, Takii et al., Yu and Wang et al. combination the user can safely maneuver the vehicle from one road to another, such as at an intersection or a merging ramp.
Regarding claim 14, the combination of Lee, Takii et al., Yu, Wang et al. and Weksler et al. disclose The obstacle detection method according to claim 11, further comprising: calculating, by the driving information unit, moving information of the vehicle; calculating, by the determiner, a distance from a driving position of the vehicle to the joining point based on the calculated information about the driving road of the vehicle and the calculated driving position information; and generating, by the controller, a warning signal based on the calculated moving information of the vehicle and the calculated distance from the driving position of the vehicle to the joining point in response to sensing, by the sensing unit, the obstacle within the changed sensing range.
FIG. 2 illustrates a process 200 of generating a driver notification of a PTI (potential traffic impact) condition determined related to a driving maneuver. PTI conditions may include the collision with a secondary vehicle, collisions with road signs, barriers, or stop lights, engaging a tire with curb or off-road, or the like. Driving maneuvers may include U-turns, left turns, right turns, merging into traffic from an on-ramp, or the like (Weksler et al. par. 44). In yet another example, when a determination is made that merging into traffic is the driving maneuver, the one or more processors again, may use optical systems, radar, or LIDAR to determine a PTI condition related to the maneuver, including but not limited to speed, distance, position, and acceleration of objects such as secondary vehicles on a roadway or on ramp in proximity to the principle vehicle. Consequently, the input device may provide inputs related to behind, and to the side, of the principle vehicle in order to make determination related to the best manner to merge into traffic. Based on the DIA data and/or TMR data, the PTI condition may be determined (Weksler et al. par. 52). If at 210, a determination is made that the probability for impact is above a threshold, or in at a high risk level, at 212, the one or more processors generate a driver notification of the PTI condition, and specifically a notification not to make the driving maneuver. In one example, when a determination is made based on the DIA data and/or TMR data that a greater than 50% chance of a collision may occur, a display within the principle vehicle may display the word “STOP” or “NO TURN”. In another example, these words are displayed with a red background, or a flashing background. Alternatively or additionally, a voice command is provided that states “NO TURN”, or “NO MERGE”. 
Tactile or haptic feedback, including vibration of the steering wheel or seat of the driver may also be utilized as a notification (Weksler et al. par. 54).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Lee US 20200225678, in view of Takii et al. US 20190248279, in view of Yu US 20210284165, in view of Wang et al. US 20200247412 and further in view of Fujimoto et al. US 20150362326.
Regarding claim 7, the combination of Lee, Takii et al., Yu and Wang et al. teaches all the limitations of claim 1.
The combination of Lee, Takii et al., Yu and Wang et al. does not explicitly teach The obstacle detection system according to claim 1, wherein: the driving information unit is further configured to calculate information about the driving road of the vehicle and the information about the target road to be joined; and the determiner is configured to anticipate whether or not the vehicle will enter the joining point by comparing a width of the driving road and a width of the target road to be joined, calculated by the driving information unit, with each other.
Fujimoto et al. teach The obstacle detection system according to claim1, wherein: the driving information unit is further configured to calculate information about the driving road of the vehicle and the information about the target road to be joined; and the determiner is configured to anticipate whether or not the vehicle will enter the joining point by comparing a width of the driving road and a width of the target road to be joined, calculated by the driving information unit, with each other. (Fujimoto et al. US 20150362326 abstract; paragraph [0034]-[0038]; [0055]; [0057]-[0061]; [0070]-[0072]; [0081]; figures 1-21;)
In the branch determination processing, minute changes in the position and changes in the traveling direction of the vehicle are separately accumulated during the vehicle's traveling in the section (branch determination section) of a predetermined length including the connection point of the currently traveled road R1 and the road R2, and on the basis of the respective accumulation results, the determination whether the vehicle keeps traveling on the road R1 or the vehicle has entered the road R2 is provided. Note that the predetermined length may be variable depending on the road types, the road width, the number of traffic lanes, and the vehicle speed or the predetermined length may be set at a fixed value (Fujimoto et al. par. 34). The limitation (1) is intended for, for example, the relation between the main road and the side road. In general, the vehicle is supposed to decelerate short of the connection site at the time of entry into the side road from the main road (in particular, this tendency is intensified in a case where the side road has a small width (where the side road is a narrow street)). That is, while the vehicle travels at a high speed, the probability is extremely low that the vehicle will enter the side road, and thus, there is no need to perform the road shape conversion processing (Fujimoto et al. par. 72).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lee, Takii et al., Yu and Wang et al. with Fujimoto et al. by incorporating the teaching of Fujimoto et al. into the system of Lee, Takii et al., Yu and Wang et al. The motivation to combine these references is to obtain, from the Fujimoto et al. reference, the width of the road into which the vehicle will merge and compare it with the width of the current road, so that in the Lee, Takii et al., Yu and Wang et al. combination the user can determine whether to maintain or reduce speed according to the width of the new merging road, enhancing maneuvering safety.
Response to Arguments
Applicant's arguments filed 11/26/2025 have been fully considered but they are not persuasive. In the remarks, applicant argues as follows:
Applicant's argument: First, applicant argues that Lee, Takii, Wang, Yu, Weksler and Fujimoto, taken individually or in combination, fail to teach or suggest "extending a lateral width of the sensing range based on the lane width of the target road to be joined" as recited in claims 1 and 9. Second, regarding non-statutory double patenting, applicant argues that US Patent 11941981 fails to teach or suggest "changing, by a controller, a sensing range of a sensing unit to position the sensing range on the target road and detect an obstacle moving on the target road to be joined in response to the determiner anticipating that the vehicle will enter the joining point, and calculating, by the driving information unit, an angle of entry between the driving road of the vehicle and the target road to be joined based on the calculated information about the driving road of the vehicle and the calculated information about the target road to be joined, wherein, the changing of the sensing range includes rotating, by the controller, the sensing range of the sensing unit based on the calculated angle of entry between the driving road of the vehicle and the target road to be joined so that the sensing range is to be located on the target road" as recited in independent claims 1 and 9.
Examiner's response: First, the examiner respectfully submits that Lee, Takii, Wang, Yu, Weksler and Fujimoto do teach "extending a lateral width of the sensing range based on the lane width of the target road to be joined" as recited in claims 1 and 9, for the following reasons. Paragraph 500 of the Lee reference teaches that the sensors can sense more on the left side in a road-merged direction in a ramp section where the road is located on the left side, as illustrated in figure 20B, and that the shape or size of the predetermined range may vary according to characteristics of a road located at the position of the vehicle. Paragraph 504 of the Lee reference discloses the processor classifying, into the first group, a main road corresponding to the forward path information and a sub road through which another vehicle can enter the main road according to a preset criterion, among those roads. Paragraph 69 of the Yu reference teaches the driving control device generating a plan for the vehicle to move from lane L3 into the target lane L4 to be joined, as illustrated in figure 3. Paragraph 86 and figures 9-10 of the Yu reference show the vehicle M rotating by the angle θ2 with respect to the road division line. Paragraphs 64 and 94-95 and figures 9-11 of the Wang et al. reference clearly teach that the system determines the distance between the vehicle position and the merging point (paragraph 64), and paragraphs 94-95 clearly provide instructions, including a safe distance, for the driver to merge. It is obvious that the sensing device equipped in the vehicle rotates along with the vehicle, and that the vehicle's sensing range changes as the vehicle rotates. In this scenario, the vehicle M shifts its sensing range onto the road L4 as illustrated in figure 3 of the Yu reference. Since the art of record still reads on the claims, the rejection stands.
Please see figure 20B of the Lee reference, figure 3 of the Yu reference, figure 4 of the Takii et al. reference, and figures 10-11 of the Wang et al. reference for support of the merging point, which is similar to figure 2 of the specification.
Second, regarding the non-statutory double patenting rejection, the examiner maintains the rejection over the art of record. Applicant argues that the patent claims fail to teach or suggest "changing, by a controller, a sensing range of a sensing unit to position the sensing range on the target road and detect an obstacle moving on the target road to be joined in response to the determiner anticipating that the vehicle will enter the joining point, and calculating, by the driving information unit, an angle of entry between the driving road of the vehicle and the target road to be joined based on the calculated information about the driving road of the vehicle and the calculated information about the target road to be joined, wherein, the changing of the sensing range includes rotating, by the controller, the sensing range of the sensing unit based on the calculated angle of entry between the driving road of the vehicle and the target road to be joined so that the sensing range is to be located on the target road". The examiner respectfully disagrees for the following reasons: both the patent claims and the instant application claims teach the same concept. Furthermore, figure 2 and col. 2, lines 10-34 of US Patent 11941981 clearly disclose extending a lateral width of the sensing range based on the lane width of the target road to be joined. Please see the non-statutory double patenting table above for the claim mapping between the instant application and the patent claims. Since the art of record still reads on the claimed invention, the rejection stands.
[Images media_image2.png through media_image8.png (greyscale PNG figures) omitted.]
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THANG D TRAN whose telephone number is (408)918-7546. The examiner can normally be reached Monday - Friday 8:00 am - 5:30 pm (pacific time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian A Zimmerman can be reached at 571-272-3059. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/THANG D TRAN/Examiner, Art Unit 2686
/BRIAN A ZIMMERMAN/Supervisory Patent Examiner, Art Unit 2686