Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1, 11 and 13 are amended.
Claims 2, 4-5, 7-10, 14, 16, 17, 20, and 22-23 were previously canceled.
Claims 1, 3, 6, 11-13, 15, 18 and 21 are pending.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. CN201911066406.9, filed on 11/07/2019.
Response to Arguments/Remarks
Applicant’s arguments with respect to claims 1, 3, 6, 11-13, 15, 18 and 21 have been considered but are moot in view of the new ground(s) of rejection as necessitated by applicant's amendments.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 6, 11-13, 15, 18 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Jie Lu [CN110231040, hereinafter Lu], in view of Xu et al. [CN107154160, hereinafter Xu], further in view of Yang et al. [CN109784526, hereinafter Yang], in view of Chidlovskii et al. [US20160070986, hereinafter Chidlovskii], further in view of Xiao Zhang et al. [CN109242214, hereinafter Zhang]. (Note that the Lu, Xu, and Yang references were included in a previous action.)
Claim 1
Lu discloses a vehicle scheduling method performed by a vehicle scheduling apparatus [see at least Lu, abstract (“scheduling”); under background [back] ¶ 002 (“vehicle scheduling system”)] and
determining path cost and path value [see at least Lu, abstract (“path cost”); under summary ¶ 001-002; 006-010 (“path cost… Optionally, the determining whether there is congestion region comprises: determining whether there is closed-loop, if exists, then determining the nodes on the closed loop to form a closed-link point set; in the closed link point set, determining the minimum value xmin and the maximum value xmax of the node x coordinate, and the minimum value ymin and the maximum value ymax of the y coordinate; taking the region range where x belongs to [xmin − α, xmax + α] and y belongs to [ymin − α, ymax + α] as the congestion region, wherein α is an integer greater than zero.”)];
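For clarity of the record, the congestion-region determination quoted above from Lu amounts to a bounding-box test around a detected closed loop, which can be sketched as follows (illustrative only; the function and variable names are the examiner's, not Lu's):

```python
def congestion_region(closed_loop_nodes, alpha):
    """Bounding-box congestion region per the passage quoted from Lu.

    closed_loop_nodes: (x, y) coordinates of the nodes forming the
    closed-link point set; alpha: Lu's margin, an integer > 0.
    Returns the region as ((x_lo, x_hi), (y_lo, y_hi)).
    """
    assert alpha > 0, "Lu requires alpha to be an integer greater than zero"
    xs = [x for x, _ in closed_loop_nodes]
    ys = [y for _, y in closed_loop_nodes]
    # Region range: x in [xmin - alpha, xmax + alpha],
    #               y in [ymin - alpha, ymax + alpha].
    return ((min(xs) - alpha, max(xs) + alpha),
            (min(ys) - alpha, max(ys) + alpha))
```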
setting a second path cost for a path reaching each target position according to the priority of each target position, wherein the higher the priority, the smaller the second path cost configured for the path reaching the corresponding target position [see at least Lu, Summary ¶ 001 (“vehicle view, an embodiment of the present invention provides a method for path planning, path cost by changing the congestion region boundary node, make the vehicle affected by the re-planned path, and scheduling the subsequent advance to the congestion region of removing set packet latency point of non-congestion region waiting. when the congestion region no longer congestion, re-dispatching vehicle set comprises waiting point to the original destination, so it can prevent the bag falling opening by sorting area portion coated amount centralized explosion caused by congestion, when there is congestion region can prevent the new vehicle added congestion region. accelerate relieving congestion degree of congestion area, so as to greatly reduce manual intervention, and improves the efficiency of the system.”); 004 (“Optionally, the determining whether there is congestion region comprises: determining whether there is closed-loop, if exists, then determining the nodes on the closed loop to form a closed-link point set; in the closed link point set, determining the minimum value xmin and the maximum value xmax of the node x coordinate, and the minimum value ymin and the maximum value ymax of the y coordinate; taking the region range where x belongs to [xmin − α, xmax + α] and y belongs to [ymin − α, ymax + α] as the congestion region, wherein α is an integer greater than zero.”)]; and
sending the optimal planning path to the target vehicle, and scheduling the target vehicle to perform a task, wherein the task comprises a sorting task or a parking task or a charging task. [see at least Lu, under background ¶ 002 (“service logic operation of the vehicle scheduling system sends the task, each task comprises a path of a start point, an end point and a connection starting point, the starting point is the current position of the vehicle, the end point is the task arriving in the vehicle node, sorting has been completed, temporarily, waiting, queue, charging and so on. when executing one mobile task vehicle, according to the path planning by the vehicle by the software lock point to lock a section of path of vehicle, the path between node vehicle can only use nodes and to lock the locked to, at the same time, vehicle point further needs to meet a series of condition: the vehicle point will dynamically lock/unlock the vehicle lock node can not be used to other vehicles, vehicle lock with number upper limit, but not node-turning vehicle in the planned path. by the vehicle lock, which can prevent a plurality of vehicles at the same time using the same node caused by the collision, side collision or rear-end collision.”); 004 (“For sorting vehicle task scheduling system, the vehicle function is divided into the sorting, queuing and sorting returned charging. when the driving path crossing map layout, avoids the task of the vehicle as much as possible, the map is divided into three functional regions, namely sorting area, a queue area and a charging area. wherein, the queue area is only a path can reach the single queuing by setting path direction, it can avoid the vehicle collides and the re-planned path. setting two bidirectional road, for queuing area vehicle charging and returning charging area. sorting area maximum, selectable paths to the same destination of typically tens of strip. 
”); under summary ¶ 001-002; 004-010 (“value… cost… determining path… new planning route”); under specific ¶ 015 (“optimization algorithm”)].
Lu does not specifically disclose but Xu does teach receiving image information of each area for sorting from an image acquisition device [see at least Xu, abstract; under Technical field [TechF] ¶ 005 (“image collection”)];
recognizing each vehicle from the image information according to a vehicle recognition model, wherein the vehicle recognition model is a neural network model [see at least Xu, TechF ¶ 005-007 (“then traffic end server calls on each path measuring device and image collecting camera head data, according to the travel condition data whether the path data and image collecting camera head of measuring device on each path, and selecting a traffic state in the best path as the preferred path. finally the traffic end server sending the optimized routes data to medical end server and the preferred path by medical end server data to the ambulance, ambulance vehicle-mounted MCU receiving preferred route data through the vehicle wireless module, display screen GPS map generated on the vehicle by vehicle-mounted MCU and the preferred path by vehicle-mounted MCU data input to the GPS map to generate optimal route navigation on the GPS map; real-time GPS data an ambulance in the driving process of vehicle-mounted MCU control vehicle GPS module to the vehicle end server sending the ambulance and traffic end server invokes the real-time data measuring device and image collecting camera head on the preferred path. and according to the real-time data of the measuring device on the preferred path and traffic state of the image collecting camera head of real-time data judging preferable route, if the preferred path in the ambulance during congestion, the traffic end server according to the real time GPS data sent by ambulance vehicle GPS module and help address information. a plurality of update path re-planning the ambulance to resorting people and selecting traffic state best updated path as optimum path update, and finally the traffic end server updating the optimal path data to the medical end server, and the medical end server updating the optimal path data to the ambulance vehicle-mounted MCU. 
An ambulance fast traffic guiding system, wherein the traffic end server calls the speed measuring device and image collecting camera head on the path data, wherein: traffic end server according to the obtained data path of the measuring device on a plurality of vehicle speed, and the sum of a plurality of vehicle speed obtaining the average speed of the vehicle on the path divided by the number of vehicles, the average speed of the vehicle as the passing speed of the path, the traffic end server according to the data collected by the image collecting camera of, obtaining vehicle density data on the path based on the image processing method, the traffic end server judging the path passing speed is greater than the preset threshold and the best preferred path for traffic state path of vehicle density data is less than preset threshold value.”)].
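The preferred-path selection quoted above from Xu reduces to two thresholds (average vehicle speed above a preset value, vehicle density below a preset value) with a minimum-density selection among qualifying paths; a sketch, with assumed names and data shapes, is:

```python
def preferred_path(paths, speed_threshold, density_threshold):
    """Select Xu's 'preferred path' as quoted above.

    paths: mapping of path id -> (list of per-vehicle speeds, vehicle density).
    A path qualifies if its average vehicle speed (sum of speeds divided
    by vehicle count) exceeds speed_threshold and its density is below
    density_threshold; among qualifying paths, the one with minimal
    density is preferred. Returns the path id, or None if none qualifies.
    """
    candidates = {}
    for pid, (speeds, density) in paths.items():
        avg_speed = sum(speeds) / len(speeds)  # average speed on the path
        if avg_speed > speed_threshold and density < density_threshold:
            candidates[pid] = density
    if not candidates:
        return None
    return min(candidates, key=candidates.get)
```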
Note: The examiner is using the broadest reasonable interpretation (BRI) of the claims, and the definition of a neural network is “An interconnected system inspired by the arrangement of neurons in the nervous system; a program, configuration of microprocessors, etc., designed to simulate this.” [Oxford English Dictionary, s.v. “neural network (n.),” March 2024, https://doi.org/10.1093/OED/6957795593]. Under the BRI, the term thus includes modeling or machine learning.
counting a number of vehicles in each area; determining vehicle density information in each area according to the number of vehicles in each area [see at least Xu, TechF ¶ 005-007];
configuring a first path cost corresponding to a path of each area according to the vehicle density information in each area [see at least Xu, under description [descript] ¶ 003 (“method to obtain the path the vehicle density data, the traffic end server judging the path passing speed is greater than the preset threshold and the best preferred path for traffic state path of vehicle density data is less than preset threshold value. the predetermined path judging threshold value if the traffic end server memory at the plurality of paths passing speed is greater than the preset threshold and the vehicle density data is less than the select vehicle density data of the minimal path is a preferred path.”)].
calculating a path value corresponding to each planning path among a plurality of planning paths from a starting position to at least one of a plurality of target positions for a target vehicle, according to the first path cost configured for the path of each area and the second path cost set for the path reaching each target position [see at least Xu, TechF ¶ 006-007 (“path threshold”); Descript ¶ 001; 003 (“path threshold”)]; and
determining an optimal planning path of the target vehicle by taking a minimum path value as a target [see at least Xu, TechF ¶ 006-007 (“path threshold”); Descript ¶ 001; 003 (“path threshold”)].
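As best understood, the claimed selection step sums the density-based first path cost over the areas a planning path traverses with the priority-based second path cost of the target position it reaches, and selects the path minimizing that value. A sketch under those assumptions (all names are the examiner's illustrations, not claim language):

```python
def optimal_planning_path(planning_paths, area_cost, target_cost):
    """Pick the planning path with minimum path value.

    planning_paths: list of (areas_traversed, target_position) tuples.
    area_cost: dict mapping each area to its first path cost
               (configured from vehicle density).
    target_cost: dict mapping each target position to its second path
                 cost (higher priority -> smaller cost).
    """
    def path_value(path):
        areas, target = path
        # Path value = sum of first path costs + second path cost.
        return sum(area_cost[a] for a in areas) + target_cost[target]

    return min(planning_paths, key=path_value)
```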
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify/combine, with a reasonable expectation of success, the specific route planning features of Lu with the traffic guidance system of Xu, thus providing a more effective and efficient technique to determine the best cost and value, and to provide scheduling using the optimal route for vehicles.
Yang more specifically teaches a plurality of target positions for a target vehicle [see at least Yang, Step 302a (“according to the current road environment information determined starting from the current position to the target position of a plurality of current passing area.”); Step 302b (“The sequentially arranged starting from the current position to the target position of the plurality of the current passing region determining at least one current passing area.”)].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify/combine, with a reasonable expectation of success, the specific route planning features of Lu and the traffic guidance system of Xu, further with the path planning using targets of Yang, thus providing a more effective and efficient technique to determine the best cost and value, and to provide scheduling using the optimal route for vehicles.
None of Lu, Xu, and Yang discloses or teaches, but Chidlovskii does more specifically teach, recognizing each vehicle from the image information according to a vehicle recognition model, wherein the vehicle recognition model is a neural network model [see at least Chidlovskii, ¶ 0031 (“To provide a more generalized formalism (not limited to the illustrative example of vehicle labeling), a domain is composed of a feature space X, and a marginal probability distribution P(X), where X = {x1, . . . , xn}, and xi ∈ X. That is, D := (X; P(X)). A task T is defined by a label space Y, and a function ƒ: X → Y. Learning the task T for the domain D, in a machine learning context, amounts to estimating a classifier function ƒ̃: X → Y, from a given training data set D = {(x1; y1), . . . , (xn; yn)}, where again xi ∈ X and yi ∈ Y, that best approximates ƒ, according to certain criteria.”); 0034 (“direct comparison of source and target samples in their respective subspaces without unnecessary data projections.”); 0035 (“An example of an alignment process that transforms the feature vectors of the target and source training data sets to a (generally different) common domain space is described in Fernando et al., “Unsupervised visual domain adaptation using subspace alignment”, in ICCV (2013). The motivation for this approach is that, since source and target domains are drawn using different marginal distributions, there might exist subspaces in source and target domains which are more robust representations of the source and target domains and where the shift between these two domains can be learned. In this illustrative domain alignment approach, Principal Component Analysis (PCA) is used to select in both target and source domains d eigenvectors corresponding to the d largest eigenvalues. These eigenvectors are used as bases of the source and target subspaces, respectively denoted by S_s and S_t, where S_s, S_t ∈ R^(D×d). The subspaces S_s and S_t are orthonormal, S_s′S_s = I_d and S_t′S_t = I_d, where I_d is the identity matrix of size d, and S_s and S_t are used to learn the shift between the two domains. A linear transformation is used to align the source subspaces to the target one. This step allows direct comparison of source and target samples in their respective subspaces without unnecessary data projections. A subspace alignment approach is suitably used to achieve this task. Basis vectors are aligned by using a transformation matrix M from S_s to S_t. M is learned by minimizing the following Bregman matrix divergence: F(M) = ‖S_sM − S_t‖_F², where ‖·‖_F denotes the Frobenius norm. Since this norm is invariant to orthonormal operation, it can be rewritten as follows:”)];
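The subspace-alignment step quoted above from Chidlovskii (after Fernando et al. 2013) has a closed-form minimizer: because S_s has orthonormal columns, M = S_s′S_t minimizes F(M) = ‖S_sM − S_t‖_F². A sketch of that computation follows (illustrative function names; PCA bases are taken from an SVD of the centered data):

```python
import numpy as np

def subspace_alignment(Xs, Xt, d):
    """Align source PCA subspace to target PCA subspace.

    Xs, Xt: source/target sample matrices of shape (n_samples, D).
    d: number of leading eigenvectors to keep.
    Returns (aligned source basis S_s @ M, target basis S_t),
    where M = S_s^T S_t is the closed-form Frobenius-norm minimizer.
    """
    def pca_basis(X, d):
        X = X - X.mean(axis=0)
        # Rows of Vt are the principal directions (orthonormal).
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        return Vt[:d].T  # shape (D, d), orthonormal columns

    S_s, S_t = pca_basis(Xs, d), pca_basis(Xt, d)
    M = S_s.T @ S_t  # minimizer of ||S_s M - S_t||_F^2
    return S_s @ M, S_t
```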
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify/combine, with a reasonable expectation of success, the specific route planning features of Lu and the traffic guidance system of Xu, with the path planning using targets of Yang, further with the image labeling and classification of Chidlovskii, thus providing a more effective and efficient technique to determine the best cost and value, and to provide scheduling using the optimal route for vehicles.
None of Lu, Xu, Yang, and Chidlovskii specifically discloses or teaches, but Zhang does teach, wherein the plurality of target positions are all unloading spots, parking places, or charging places [see at least Zhang, ¶ 143 (“a minimum value of the path as the optimal path from the path set to be selected is deleted, added to the plurality of optimal paths integrated represented in the optimized route set.”)].
Zhang also more specifically teaches determining an optimal planning path among the plurality of planning paths of the target vehicle by taking a minimum path value as a target, and taking the target position that the optimal planning path reaches as an optimal target position among the plurality of target positions [see at least Zhang, ¶ 3 (“traversing the shortest route of all users point to obtain optimal route result by ant colony optimization method, the method for route planning under the fixed path target.”); 33-34 (“A distribution route planning device, comprising a position obtaining module, delivery route calculation module, a distribution route updating module, wherein: (34) the position obtaining module and the current locating place and distribution is used for obtaining the current distribution of all customer orders of the location and distribution of the initial locating and obtaining new order”); 53 (“comprises a position updating module, a route set updating module updating calculation module and updating route module “); 143].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify/combine, with a reasonable expectation of success, the specific route planning features of Lu and the traffic guidance system of Xu, with the path planning using targets of Yang, with the image labeling and classification of Chidlovskii, further with the ability of Zhang to update and determine the optimal route for a delivery, thus providing a more effective and efficient technique to determine the best cost and value, and to provide scheduling using the optimal route for vehicles.
Claim 3
Lu, Xu, Yang, Chidlovskii and Zhang disclose and/or teach the method of Claim 1.
None of Lu, Xu, and Yang discloses or teaches, but Chidlovskii does more specifically teach, acquiring a sample image [see at least Chidlovskii, ¶ 0034 (“direct comparison of source and target samples in their respective subspaces without unnecessary data projections.”); 0035];
labeling a vehicle in the sample image [see at least Chidlovskii, Abstract, ¶ 0001-0002 (“[0001] The following relates to the image labeling arts, camera-based object labeling arts, and to applications of same such as vehicle labeling and so forth. [0002] Camera-based vehicle labeling (or classification) using a still camera or video camera has diverse applications, such as in: automated or semi-automated toll assessment for toll roads, bridges, parking, or so forth (where, for example, the toll may depend on the number of wheel axles, or the vehicle type, e.g. trucks may pay a higher toll than cars); automated monitoring of a parking facility (e.g., detecting whether or not a vehicle is in a parking spot—this actually labels the parking spot, rather than the vehicle); camera based enforcement of speed limits or other traffic regulations (where the vehicle is labeled as to its speed, or as to whether it has run a red light); monitoring of carpool lanes (where the vehicle is labeled by number of occupants); roadway usage studies (where vehicles may be classified as to their state or country of registration based on their license plates); and so forth. Depending upon the type of vehicle labeling to be performed, the vehicle image that is used for the automated vehicle labeling may be an image of the entire vehicle, or an image of a portion of the vehicle, such as the rear license plate.”)]; and
training a vehicle recognition model by using the labeled image, to recognize the vehicle in the image information according to the trained vehicle recognition model [see at least Chidlovskii, ¶ 0003 (“In a common installation approach, the camera is mounted so as to have a suitable view of the toll booth entrance, roadway, parking lot entrance, or other location to be monitored, and a set of training vehicle images are acquired. A human installer manually labels each training image as to the vehicle type. These labeled vehicle images form a labeled training set for the camera installation, which are then used to train a vehicle classifier. The training process typically entails optional pre-processing of the image (for example, in the case of license plate labeling, the pre-processing may include identifying the video frame that optimally shows the rear license plate and then segmenting the frame image to isolate the license plate), generating a quantitative representation, e.g. feature vector, representing (optionally pre-processed) image, and training the classifier to assign labels to the feature vector representations that optimally match the manually assigned labels. Thereafter, during the labeling phase, when the camera acquires an image of a vehicle it is analogously pre-processed and converted to a feature vector which is then run through the trained classifier to label the vehicle.”)].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify/combine, with a reasonable expectation of success, the specific route planning features of Lu and the traffic guidance system of Xu, with the path planning using targets of Yang, further with the image labeling and classification of Chidlovskii, thus providing a more effective and efficient technique to determine the best cost and value, and to provide scheduling using the optimal route for vehicles.
Claim 6
Lu, Xu, Yang, Chidlovskii and Zhang disclose and/or teach the method of Claim 1.
Lu further discloses wherein the greater the vehicle density, the greater the first path cost configured for the path of the corresponding area [see at least Lu, summary ¶ 006-007 (“congestion cost”)].
Claim 11
Claim 11 is the apparatus for the method of Claim 1. Claim 11 has similar limitations to claim 1, therefore claim 11 is rejected with the same rationale as claim 1.
Claim 12
Claim 12 has similar limitations to claim 2, therefore claim 12 is rejected with the same rationale as claim 2.
Claim 13
Claim 13 is the Non-transitory computer-readable storage medium for the method of claim 1. Claim 13 has similar limitations to claim 1, therefore claim 13 is rejected with the same rationale as claim 1.
Claim 15
Claim 15 has similar limitations to claim 3, therefore claim 15 is rejected with the same rationale as claim 3.
Claim 18
Claim 18 has similar limitations to claim 6, therefore claim 18 is rejected with the same rationale as claim 6.
Claim 21
Claim 21 has similar limitations to claim 3, therefore claim 21 is rejected with the same rationale as claim 3.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Zhang, Liangliang, et al. [US20190317508]
A new cost design is disclosed for evaluating candidate path curves for navigating an autonomous driving vehicle (ADV) through a segment of a route which may include an obstacle. Each point on each candidate path curve has a plurality of attributes having logical values and an associated priority of evaluation, and at least one numeric attribute having an associated priority of evaluation. A cost for each path curve is determined using the attributes and priorities, and a least cost path curve is selected using the attributes and priorities. By comparing attribute values in accordance with priority, and utilizing logical values, the efficiency of determining path curve cost and selecting a least cost path curve is substantially improved. [abstract]
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOAN T GOODBODY whose telephone number is (571) 270-7952. The examiner can normally be reached on M-TH 7-3 (US Eastern time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.html.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, VIVEK KOPPIKAR, can be reached at (571) 272-5109. The Fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from the USPTO Customer Service Representative or access to the automated information system, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000.
/JOAN T GOODBODY/
Examiner, Art Unit 3667
(571) 270-7952