DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4, 7-8, 14-17, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ueda et al. (US 20190361436 A1) (Hereinafter referred to as Ueda).
Regarding Claim 1, Ueda discloses A vehicle data processing method, comprising: (See [0007], “the present disclosure among methods, devices, . . .”)
obtaining transmission duration for transmitting first driving data from a vehicle to a vehicle control terminal, the first driving data representing a driving condition of the vehicle at a first moment; (See [0102], “Autonomous driving controller 111 transmits the sensed data including the position information of the vehicle, the vehicle-speed information of the vehicle, and the information of the object around the vehicle, which have been selected, to remote control device 50 via network 2 (S103).” Here, Ueda teaches speed and position data (driving data representing a driving condition of the vehicle) and transmitting the sensed data to a remote control device (vehicle control terminal).
Also see [0080], “Communication delay estimator 113 estimates delay time of a communication passage of the first communication system or second communication system. . . For example, communication delay estimator 113 can estimate the delay time from a difference between a transmission time at which a signal is transmitted from autonomous vehicle control device 10, and a receiving time at which the signal is received by remote control device 50.” Note that delay time corresponds to “transmission duration for transmitting first driving data from a vehicle to a vehicle control terminal”.)
calculating, based on the transmission duration and the first driving data, second driving data for representing a driving condition of the vehicle at a second moment, the second moment being later than the first moment; and (See [0159], “Picture analyzer 513 estimates an actual current position of autonomous vehicle 1 based on the received communication delay amount and the vehicle speed of autonomous vehicle 1 (S224). Picture analyzer 513 estimates a position where the vehicle speed (speed per second) multiplied by the communication delay amount is moved in the traveling direction of autonomous vehicle 1 as the current position of autonomous vehicle 1. The traveling direction of autonomous vehicle 1 can be estimated by detecting, for example, a movement vector of the position information sensed by GPS sensor 25.” Here, the estimated actual current position corresponds to “second driving data for representing a driving condition of the vehicle at a second moment”. Note that the estimated actual current position is at a “second moment” which is later than “a first moment” because it was calculated using the sensed data and communication delay, thus implying that it is later than when the vehicle’s data was sensed at the first moment.)
generating, based on the second driving data, a driving image representing the driving condition of the vehicle at the second moment. (See [0167], “Picture generator 511 generates a monitoring picture on which autonomous vehicle 1 and the moving object in the estimated current position as well as a risk range object are superimposed (S226a).” In this case, a monitoring picture at the estimated current position corresponds to “a driving image representing the driving condition of the vehicle at the second moment”.)
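For illustration only, the following minimal sketch (not taken from Ueda or from the claimed invention; the timestamps, units, coordinate frame, and heading convention are assumed) models the computation mapped above: the transmission duration is the difference between the transmit and receive timestamps (Ueda [0080]), and the position at the second moment is the reported position shifted by speed multiplied by that duration along the traveling direction (Ueda [0159]).

import math

def estimate_delay(transmit_time_s, receive_time_s):
    # Transmission duration as the difference between the two timestamps (Ueda [0080]).
    return receive_time_s - transmit_time_s

def dead_reckon(position_xy, speed_mps, heading_rad, delay_s):
    # Shift the reported position by speed * delay along the traveling direction (Ueda [0159]).
    distance = speed_mps * delay_s
    dx, dy = math.cos(heading_rad), math.sin(heading_rad)
    return (position_xy[0] + distance * dx, position_xy[1] + distance * dy)

# Example (assumed values): a vehicle reported at (0.0, 0.0) traveling at 20 m/s along the
# x-axis, with a 0.5 s transmission delay, is placed 10 m further along its path.
delay = estimate_delay(100.0, 100.5)                         # 0.5 s
second_position = dead_reckon((0.0, 0.0), 20.0, 0.0, delay)  # -> (10.0, 0.0)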
Regarding Claim 2, Ueda discloses The method according to claim 1, further comprising: presenting the driving image, so that the vehicle control terminal controls a driving status of the vehicle based on the driving image. (See [0097] teaching generating a monitoring picture based on sensed data.
See [0138], “Upon accepting a drive restarting operation including designation of a traveling route for starting to move at the time of restarting driving, which is carried out by a monitor who watches the monitoring picture displayed on display 54 (Y in S24a), remote control device 50 transmits a drive restarting instruction signal including the traveling route for starting to move to autonomous vehicle control device 10 via network 2 (S25a).” Here, Ueda teaches to present the monitoring picture (driving image) to a monitor who watches and can use the remote control device (vehicle control terminal) to send an instruction signal to the vehicle (control a driving status of the vehicle based on the driving image).
Further see [0349], “Furthermore, the self-driving controlling method has a step of autonomously controlling a drive of the autonomous vehicle (1) based on the acquired sensed data. . . In addition, the self-driving controlling method has a step of receiving an instruction signal from the remote control device (50) via the network (2).”)
Regarding Claim 3, Ueda discloses The method according to claim 1, wherein the first driving data comprises a first position parameter indicating a position of the vehicle at the first moment, a first traffic travel indication parameter indicating a first traveling direction of the vehicle, and a first movement parameter indicating a motion state of the vehicle; (See [0073], “Vehicle-speed sensor 24 detects a speed of autonomous vehicle 1. GPS sensor 25 detects position information of autonomous vehicle 1.”
Also see [0159], “The traveling direction of autonomous vehicle 1 can be estimated by detecting, for example, a movement vector of the position information sensed by GPS sensor 25.” Here, Ueda teaches position information (a first position parameter), traveling direction of autonomous vehicle (first traffic travel indication parameter), and speed (a first movement parameter).)
and the calculating, based on the transmission duration and the first driving data, second driving data for representing a driving condition of the vehicle at a second moment comprises: calculating a position of the vehicle at the second moment based on the transmission duration, the first position parameter, the first traffic travel indication parameter, and the first movement parameter, to obtain a second position parameter; and using the second position parameter as the second driving data. (See [0159], “Picture analyzer 513 estimates an actual current position of autonomous vehicle 1 based on the received communication delay amount and the vehicle speed of autonomous vehicle 1 (S224). Picture analyzer 513 estimates a position where the vehicle speed (speed per second) multiplied by the communication delay amount is moved in the traveling direction of autonomous vehicle 1 as the current position of autonomous vehicle 1.” Once again, in this case, the estimated actual current position corresponds to the position of the vehicle at the second moment.)
Regarding Claim 4, Ueda discloses The method according to claim 3, wherein the calculating a position of the vehicle at the second moment based on the transmission duration, the first position parameter, the first traffic travel indication parameter, and the first movement parameter comprises: determining the first traveling direction of the vehicle based on the first traffic travel indication parameter; (See [0159], “The traveling direction of autonomous vehicle 1 can be estimated by detecting, for example, a movement vector of the position information sensed by GPS sensor 25.”)
calculating, based on the transmission duration and the first movement parameter, a first traveling distance of the vehicle in the first traveling direction; and calculating the second position parameter based on the first position parameter and the first traveling distance of the vehicle in the first traveling direction. (See [0159] teaching calculating the estimated actual current position (second position parameter) which is based on the position information sensed by GPS (first position parameter). Note that although not expressly stated, it would be implicit that the system calculates “a first traveling distance” in the first traveling direction. This is because a new position of the vehicle is being estimated from its transmitted position, speed, and communication delay. Thus, implicitly, the distance the vehicle has traveled in that direction would have to be calculated to estimate its new position.)
Regarding Claim 7, Ueda discloses The method according to claim 4, wherein the calculating the second position parameter based on the first position parameter and the first traveling distance of the vehicle in the first traveling direction comprises: taking a position coordinate represented by the first position parameter as a starting point, and determining, as the second position parameter, a position coordinate which is away from the starting point by the first traveling distance in the first traveling direction. (See [0073], “Vehicle-speed sensor 24 detects a speed of autonomous vehicle 1. GPS sensor 25 detects position information of autonomous vehicle 1.”
See [0159], “Picture analyzer 513 estimates an actual current position of autonomous vehicle 1 based on the received communication delay amount and the vehicle speed of autonomous vehicle 1 (S224). Picture analyzer 513 estimates a position where the vehicle speed (speed per second) multiplied by the communication delay amount is moved in the traveling direction of autonomous vehicle 1 as the current position of autonomous vehicle 1.”
Note that it would be reasonable to assume that the sensed position information (a first position parameter) is away from the estimated actual current position (second position parameter) given the scenario of a vehicle in motion. Also, once again, although Ueda does not explicitly state that a distance is calculated, such a distance calculation is necessarily implied.)
Regarding Claim 8, Ueda discloses The method according to claim 3, wherein the first driving data further comprises an environmental parameter of the vehicle at the first moment, and the environmental parameter comprises a third position parameter for indicating a position of a mobile object around the vehicle at the first moment, (See [0158], “Picture analyzer 513 detects a moving object from each frame of the received picture data (S222). Picture analyzer 513 searches the frames using an identifier of a moving object registered in advance so as to be recognized as an obstacle, and detects the moving object. Picture analyzer 513 estimates the moving speed of the moving object detected in the frames of the picture data (S223). Picture analyzer 513 detects a difference between a position of the moving object detected in the current frame and a position of the moving object detected in the past frame to detect a movement vector of the moving object.” In this case, a moving object can be considered as “an environmental parameter”, and the position of the moving object corresponds to “a third position parameter for indicating a position of a mobile object around the vehicle at the first moment”.)
a second traffic travel indication parameter indicating a traveling direction of the mobile object, and a second movement parameter indicating a motion state of the mobile object; (See [0104], “Furthermore, detecting a movement vector of an object allows the traveling direction of each object to be specified.” Thus, Ueda teaches the traveling direction of each object (second traffic travel indication parameter) and a movement vector of an object (motion state of the object).)
and the calculating, based on the transmission duration and the first driving data, second driving data for representing a driving condition of the vehicle at a second moment further comprises: calculating a position of the mobile object at the second moment based on the transmission duration, the third position parameter, the second traffic travel indication parameter, and the second movement parameter, to obtain a fourth position parameter; and (See Ueda [0160], “Picture analyzer 513 estimates an actual current position of the moving object based on the received communication delay amount and the estimated moving speed of a moving object (S225). Picture analyzer 513 estimates a position where the vehicle speed (speed per second) multiplied by the communication delay amount is shifted in the traveling direction of the moving object as a current position of the moving object.” Here, the estimated actual current position of the moving object corresponds to the “position of the mobile object at the second moment” or “fourth position parameter”, which takes into account the communication delay amount or “transmission duration”. Although not explicitly stated, Ueda would be taking into account the object’s starting position or “third position parameter” and the object’s traveling direction or “second traffic travel indication parameter”.)
using the fourth position parameter and the second position parameter as the second driving data. (See [0161], “Picture generator 511 generates a monitoring picture on which autonomous vehicle 1 and the moving object at the respective estimated current positions are superimposed (S226).”)
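As a minimal illustrative sketch only (the field names are hypothetical and not taken from Ueda or from the claims), the same speed-times-delay shift can be applied both to the vehicle and to each detected moving object, and the two results together stand in for the claimed second driving data:

import math

def shift(pos, speed, heading_rad, delay_s):
    # Same speed * delay shift as above, applied to any sensed entity.
    d = speed * delay_s
    return (pos[0] + d * math.cos(heading_rad), pos[1] + d * math.sin(heading_rad))

def predict_scene(vehicle, objects, delay_s):
    # Second position parameter: the vehicle's extrapolated position.
    second_pos = shift(vehicle["pos"], vehicle["speed"], vehicle["heading"], delay_s)
    # Fourth position parameter(s): each detected moving object's extrapolated position.
    fourth_pos = [shift(o["pos"], o["speed"], o["heading"], delay_s) for o in objects]
    # Together these stand in for the "second driving data" used to draw the image.
    return {"vehicle": second_pos, "objects": fourth_pos}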
Regarding Claim 14, Ueda discloses A vehicle data processing apparatus, comprising: one or more processors; and a memory, configured to store one or more programs, the one or more programs, when executed by the one or more processors, causing the one or more processors to perform: (See [0074], “Autonomous vehicle control device 10 includes controller 11, storage 12 and input/output unit 13. . . The functions of controller 11 can be implemented by cooperation of a hardware resource and a software resource, or by only a hardware resource. As the hardware resource, a processor . . .” See [0075], “Storage 12 includes, for example, HDD (Hard Disk Drive), and/or SSD (Solid-State Drive).” See [0092], “As the software resource, programs such as an operating system and application can be utilized.”)
obtaining transmission duration for transmitting first driving data from a vehicle to a vehicle control terminal, the first driving data representing a driving condition of the vehicle at a first moment; calculating, based on the transmission duration and the first driving data, second driving data for representing a driving condition of the vehicle at a second moment, the second moment being later than the first moment; and generating, based on the second driving data, a driving image representing the driving condition of the vehicle at the second moment. (The above limitations are similar to those of Claim 1, and are therefore rejected under a similar rationale as that of Claim 1.)
Regarding Claim 15, Claim 15 is similar to that of Claim 2 and is therefore rejected under a similar rationale as that of Claim 2.
Regarding Claim 16, Claim 16 is similar to that of Claim 3 and is therefore rejected under a similar rationale as that of Claim 3.
Regarding Claim 17, Claim 17 is similar to that of Claim 4 and is therefore rejected under a similar rationale as that of Claim 4.
Regarding Claim 20, Ueda discloses A non-transitory computer-readable storage medium, having a computer program stored thereon, the computer program, when executed by at least one processor, causing the at least one processor to perform: (See [0075], “Storage 12 includes, for example, HDD (Hard Disk Drive), and/or SSD (Solid-State Drive).” See [0092], “As the software resource, programs such as an operating system and application can be utilized.” Note that storage such as HDD and SSD correspond to a non-transitory computer-readable storage medium.)
obtaining transmission duration for transmitting first driving data from a vehicle to a vehicle control terminal, the first driving data representing a driving condition of the vehicle at a first moment; calculating, based on the transmission duration and the first driving data, second driving data for representing a driving condition of the vehicle at a second moment, the second moment being later than the first moment; and generating, based on the second driving data, a driving image representing the driving condition of the vehicle at the second moment. (The above limitations are similar to those of Claim 1, and are therefore rejected under a similar rationale as that of Claim 1.)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 5 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Ueda in view of Trepagnier et al. (CN 104133473 A) (Hereinafter referred to as Trepagnier).
Regarding Claim 5, Ueda discloses The method according to claim 4, . . . and the determining a first traveling direction of the vehicle based on the first traffic travel indication parameter comprises: analyzing the first traveling direction based on a movement vector. (See [0159], “The traveling direction of autonomous vehicle 1 can be estimated by detecting, for example, a movement vector of the position information sensed by GPS sensor 25.”)
However, Ueda fails to explicitly disclose The method according to claim 4, wherein the first traffic travel indication parameter comprises a heading angle; and the determining a first traveling direction of the vehicle based on the first traffic travel indication parameter comprises: analyzing the first traveling direction based on the heading angle.
Trepagnier teaches wherein the first traffic travel indication parameter comprises a heading angle; (See [0054], “As shown in FIG. 2, the processor 24 transmits the real-time locating device position, heading, altitude, and speed of the vehicle 25 to the processor 24 and multiple times per second”
Also see [0152], “wherein x, y are the read point in the whole frame coordinates, theta is yaw angle, P is front heading angle, Vf is the front wheel speed. a rear wheel speed is K = eight-bit cos (furnace).”)
analyzing the first traveling direction based on the heading angle. (See [0054] and [0152] teaching transmitting the heading angle data. In combination with Ueda [0159], which teaches estimating a traveling direction using a movement vector, the above limitation is taught. Note that a vector implies both a magnitude and a direction, and in this case, the direction can be based on the heading angle taught by Trepagnier.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ueda with Trepagnier to include using a heading angle to determine the direction of the vehicle.
The motivation to combine Ueda with Trepagnier would have been obvious as both arts are within the same field of processing vehicle data (see Trepagnier abstract). Trepagnier is simply teaching that the heading angle is a common data point to consider. The benefit of using a heading angle is that it is a known and common way to determine the traveling direction of the vehicle (See Trepagnier [0023]).
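For illustration only (a sketch assuming a planar coordinate frame and a heading angle measured in radians from the x-axis; not taken from either reference), the two ways of obtaining the first traveling direction discussed above can be contrasted as follows:

import math

def direction_from_movement_vector(prev_xy, curr_xy):
    # Ueda's approach ([0159]): direction from a movement vector of successive GPS fixes.
    dx, dy = curr_xy[0] - prev_xy[0], curr_xy[1] - prev_xy[1]
    norm = math.hypot(dx, dy) or 1.0
    return (dx / norm, dy / norm)

def direction_from_heading(heading_rad):
    # The claimed alternative: direction taken directly from a transmitted heading angle.
    return (math.cos(heading_rad), math.sin(heading_rad))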
Regarding Claim 18, Claim 18 is similar to that of Claim 5 and is therefore rejected under a similar rationale as that of Claim 5.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Ueda in view of Yu et al. (US 20200364928 A1) (Hereinafter referred to as Yu).
Regarding Claim 10, Ueda discloses The method according to claim 1, wherein the second driving data comprises a second position parameter of the vehicle at the second moment and a fourth position parameter of a mobile object around the vehicle at the second moment; (See [0160], “Picture analyzer 513 estimates an actual current position of the moving object based on the received communication delay amount and the estimated moving speed of a moving object (S225).” Note that the estimated actual current position of the moving object corresponds to the “fourth position parameter of a mobile object around the vehicle at the second moment”.)
and the generating, based on the second driving data, a driving image representing the driving condition of the vehicle at the second moment comprises: . . . the driving image of the vehicle at the second moment based on the second position parameter and the fourth position parameter to obtain the driving image. (See [0161], “Picture generator 511 generates a monitoring picture on which autonomous vehicle 1 and the moving object at the respective estimated current positions are superimposed (S226).”)
However, Ueda fails to explicitly disclose and the generating, based on the second driving data, a driving image representing the driving condition of the vehicle at the second moment comprises: performing three-dimensional reconstruction on the driving image of the vehicle at the second moment based on the second position parameter and the fourth position parameter to obtain the driving image.
Yu teaches performing three-dimensional reconstruction on the driving image of the vehicle at the second moment based on the second position parameter and the fourth position parameter to obtain the driving image. (See Abstract, “An example apparatus includes a 3D scene generator to generate a 3D model for digital image scene reconstruction based on a trained generative model and a digital image captured in a real environment.” In this case, Yu is an art that teaches the well-known technique of performing 3D reconstruction on an image. In combination with Ueda already teaching a monitoring picture (“driving image”), then performing 3D reconstruction on that image would read on the above claim limitations.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ueda with Yu to include performing 3D reconstruction on the driving image.
The motivation to combine Ueda with Yu would have been obvious as the benefit of 3D reconstruction is that it provides greater depth and realism to the scene. Yu [0002] explicitly teaches that, “3D reconstruction has benefits in many different fields such as, for example, surveying, mapping, medical imaging, 3D printing, virtual reality, robotics, etc.” and vehicle data processing would obviously be a field that benefits from 3D reconstruction as well.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Ueda in view of Yu and in further view of Liu et al. (US 10966069 B1) (Hereinafter referred to as Liu).
Regarding Claim 11, Ueda in view of Yu disclose The method according to claim 10, wherein the performing three-dimensional reconstruction on the driving image of the vehicle at the second moment based on the second position parameter and the fourth position parameter comprises: using a digital twin technology (See Ueda [0167], “Picture generator 511 generates a monitoring picture on which autonomous vehicle 1 and the moving object in the estimated current position as well as a risk range object are superimposed (S226a). Processing of the step S220 to step S226a mentioned above are repeatedly carried out (N in S227) until the driving of autonomous vehicle 1 is ended (Y in S227).” Here, Ueda teaches repeating the steps of generating a monitoring picture during the driving of the vehicle. In combination with Yu teaching 3D reconstruction on the image, it is reasonable to assume that 3D reconstruction is performed repeatedly as well, given that new monitoring pictures are generated. Thus, it can be reasoned that the combination would be considered as using “digital twin technology”, as the 3D reconstruction would be dynamic in relation to the repeatedly generated monitoring picture.)
performing three-dimensional reconstruction on the vehicle based on the second position parameter to obtain a vehicle in three-dimensional form; (See Ueda [0167], “Picture generator 511 generates a monitoring picture on which autonomous vehicle 1 and the moving object in the estimated current position as well as a risk range object are superimposed (S226a).” Since the vehicle is within the image, it is reasonable that Yu's teaching of 3D reconstruction would also obtain a vehicle in three-dimensional form.)
performing three-dimensional reconstruction on the mobile object based on the fourth position parameter to obtain a 3D mobile object in three-dimensional form; and (See [0167], “Picture generator 511 generates a monitoring picture on which autonomous vehicle 1 and the moving object in the estimated current position as well as a risk range object are superimposed (S226a).” Since the mobile object is within the image, it is reasonable that Yu's teaching of 3D reconstruction would also obtain a mobile object in three-dimensional form.)
performing image rendering on the vehicle in three-dimensional form, and the 3D mobile object, to obtain the driving image. (See Ueda [0094], “Picture generator 511 generates a picture to be displayed on display 54, based on sensed data received from autonomous vehicle control device 10, and two-dimensional or three-dimensional map data.” Here, Ueda does teach generating a picture for display based on three-dimensional map data, which is essentially image rendering.
Note that Yu teaches 3D reconstruction, and thus displaying those 3D models would require the well-known technique of image rendering. Thus, a combination of Ueda and Yu would teach performing image rendering on the 3D vehicle and mobile object.)
However, Ueda in view of Yu fail to disclose The method according to claim 10, wherein the first driving data comprises a fifth position parameter of a static object around the vehicle; and the performing three-dimensional reconstruction on the driving image of the vehicle at the second moment based on the second position parameter and the fourth position parameter comprises:
using a digital twin technology to perform, based on the fifth position parameter, three-dimensional (3D) reconstruction on the static object, to obtain a static object in three-dimensional form; . . .
performing image rendering on the static object in three-dimensional form, the vehicle in three-dimensional form, and the 3D mobile object, to obtain the driving image.
Liu teaches wherein the first driving data comprises a fifth position parameter of a static object around the vehicle; (See Col 4 Lines 45-54, “The map sensor data 174 includes any data indicating features of a roadway system and/or environment surrounding the vehicle 130 including, but not limited to, traffic lights, traffic signs, curbs, crosswalks, railway tracks, guardrails, poles, bus stops, speed bumps, potholes, overpasses, buildings, and/or the like. It should be understood that the map sensor data 174 includes any type of data indicating features of a roadway system and/or environment surrounding the vehicle 130 and is not limited to the embodiments described herein.” Note that the objects described by Liu are static objects. In combination with Ueda, which already teaches detecting the position of mobile objects, it would be reasonable to assume that the position of the static objects (fifth position parameter) would be captured as well.)
to perform, based on the fifth position parameter, three-dimensional (3D) reconstruction on the static object, to obtain a static object in three-dimensional form; (See Col 4 Lines 45-54 teaching map sensor data which comprises static objects. Also see Col 5 Lines 2-4, “In some embodiments, the map sensor data 174 and the pose of the vehicle 130 may be utilized to generate the HD map, as described below in further detail with reference to FIG. 4.” Here, using the map sensor data to generate a map would be analogous to Ueda teaching to generate a monitoring image with the vehicle and mobile objects. Thus the combination of the two arts would have the monitoring image include both static and mobile objects. Then once again using the teachings of Yu, 3D reconstruction can be performed on that image to get a static object in 3D form.)
performing image rendering on the static object in three-dimensional form, the vehicle in three-dimensional form, and the 3D mobile object, to obtain the driving image. (See Col 4 Lines 45-54 teaching map sensor data which comprises static objects. As described above, it is implied that image rendering is performed in order to display the 3D models on a 2D screen. Thus, with Liu also teaching static objects on the map, the image rendering would also render the static objects into the driving image.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ueda in view of Yu with Liu to include sensing static objects in the environment.
The motivation to combine Ueda in view of Yu with Liu would have been obvious as Ueda and Liu are both within the same field of processing environment data picked up by sensors on a vehicle (See Liu Abstract). Liu is simply reciting common static objects that would be picked up by the camera sensors. The benefit of including static objects along with the mobile objects and vehicle in the generated monitoring image is that it can give more visual clarity to the user and create a more complete picture of the surroundings.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Ueda in view of Burt et al. (US 20170003668 A1) (Hereinafter referred to as Burt).
Regarding Claim 12, Ueda discloses The method according to claim 1, wherein the calculating, based on the transmission duration and the first driving data, second driving data for representing a driving condition of the vehicle at a second moment comprises: processing the first driving data (See Ueda [0159] teaching using the transmitted position of the vehicle (first driving data) to estimate the current actual position.)
calculating the driving condition of the vehicle at the second moment based on the transmission duration. (See Ueda [0159] teaching calculating an estimated current actual position (driving condition of the vehicle at the second moment) based on a communication delay amount (transmission duration).)
However, Ueda fails to explicitly disclose The method according to claim 1, wherein the calculating, based on the transmission duration and the first driving data, second driving data for representing a driving condition of the vehicle at a second moment comprises: performing structuralization processing on the first driving data to obtain structured first driving data;
classifying the structured first driving data to obtain a plurality of categories of driving data; and
calculating the driving condition of the vehicle at the second moment based on the transmission duration and the plurality of categories of driving data to obtain the second driving data.
Burt teaches performing structuralization processing on the first driving data to obtain structured first driving data; (See [0009], “methods for classifying and standardizing sensors and sensor data using time-series data and/or meta data, . . .” In this scenario, sensor data would encompass the first driving data taught by Ueda, and classifying and standardizing corresponds to the claimed “structuralization”.)
classifying the structured first driving data to obtain a plurality of categories of driving data; and (See [0009], “methods for classifying and standardizing sensors and sensor data using time-series data and/or meta data, . . .”)
calculating the driving condition of the vehicle at the second moment based on the transmission duration and the plurality of categories of driving data to obtain the second driving data. (See Burt [0009] teaching to classify the sensor data. This is taken in combination with Ueda, which already teaches calculating the driving condition of the vehicle at the second moment based on the transmission duration, position, and direction information. In particular, the position and direction information is considered sensor data, and since Burt teaches classifying sensor data, it can be considered that the driving condition is calculated based on the classified categories of driving data.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ueda with Burt to include classifying and standardizing sensors and sensor data.
The motivation to combine Ueda with Burt would have been obvious as both arts utilize and process sensor data (See Burt Abstract). It should be noted that Ueda already teaches different types of sensor data such as positioning information, speed data, etc. Burt more explicitly defines the process of classifying and standardizing these types of sensor data. Finally, Burt notes in the Abstract that these techniques provide improvements to system monitors and reporting elements.
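For illustration only (the category and field names below are hypothetical assumptions, not drawn from Ueda, Burt, or the claims), the combined teaching can be pictured as first grouping the raw driving data into structured categories and then feeding the position and motion categories into the extrapolation already described for Ueda:

def structure_and_classify(raw):
    # Group raw driving data fields into hypothetical categories ("structured" data).
    categories = {"position": {}, "motion": {}, "environment": {}}
    for key, value in raw.items():
        if key in ("latitude", "longitude"):
            categories["position"][key] = value
        elif key in ("speed", "heading", "yaw_rate"):
            categories["motion"][key] = value
        else:
            categories["environment"][key] = value
    return categories

# The position and motion categories would then be the inputs to the extrapolation
# step already described for Ueda (speed multiplied by delay along the traveling direction).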
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Ueda in view of Zhang (CN 108595257 B).
Regarding Claim 13, Ueda discloses The method according to claim 1, wherein the calculating, based on the transmission duration and the first driving data, second driving data for representing a driving condition of the vehicle at a second moment comprises: . . . calculating the second driving data and generating the driving image; (See Ueda [0159] teaching calculating second driving data and [0167] teaching generating the driving image.)
calculating, based on the transmission duration and the first driving data, the second driving data for representing the driving condition of the vehicle at the second moment. (See Ueda [0159] teaching calculating second driving data based on the first driving data and transmission duration delay.)
However, Ueda fails to explicitly disclose The method according to claim 1, wherein the calculating, based on the transmission duration and the first driving data, second driving data for representing a driving condition of the vehicle at a second moment comprises: obtaining specified duration, the specified duration being estimated duration required for calculating the second driving data and generating the driving image;
determining a sum of the transmission duration and the specified duration as target duration; and
calculating, based on the target duration and the first driving data, the second driving data for representing the driving condition of the vehicle at the second moment.
Zhang teaches obtaining specified duration, the specified duration being estimated duration required for calculating the second driving data and generating the driving image; (See Claim 1, “the task completion time comprises the sum of the calculation time of the task and the communication time of the task,” In this case, the calculation time of the task would be the “estimated duration required for calculating the second driving data and generating the driving image” and thus would correspond to being a “specified duration”.)
determining a sum of the transmission duration and the specified duration as target duration; and (See Claim 1, “the task completion time comprises the sum of the calculation time of the task and the communication time of the task,” In this case, we are taking the sum of the calculation time of the task (specified duration) and communication time of the task (transmission duration) to get the task completion time (target duration).)
calculating, based on the target duration and the first driving data, the second driving data for representing the driving condition of the vehicle at the second moment. (See Claim 1 teaching task completion time (target duration). In combination with Ueda [0156], which teaches taking the communication delay into account in the duration calculations, the combination would account not only for the communication delay but also for the calculation time needed, and thus would teach the above claim limitation.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ueda with Zhang to include a task completion time.
The motivation to combine Ueda with Zhang would have been obvious as both arts take into account the delay in communication between remote systems. Zhang simply expands upon that and also takes into account the calculation time needed to complete the task. The benefit of accounting for task completion time is that it provides a more accurate measure of how long it has been from when the sensor data was transmitted to when the monitoring image is generated, thus giving a better estimate of the actual current position of the vehicle. Further, Zhang in Page 2 Paragraphs 1-3 describes cloud computing and the need to optimize and consider the completion time of the tasks.
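For illustration only (the numeric values below are assumptions, not taken from either reference), the combination amounts to extrapolating over the sum of the communication delay and the estimated computation time rather than over the communication delay alone:

def target_duration(transmission_s, specified_s):
    # Sum of the communication delay and the estimated processing/rendering time.
    return transmission_s + specified_s

# Example (assumed values): 0.4 s of network delay plus 0.1 s of computation time
# gives a 0.5 s window over which the vehicle's motion is extrapolated.
window = target_duration(0.4, 0.1)  # 0.5 s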
Allowable Subject Matter
Claims 6, 9, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Claim 6 recites the limitations of wherein the calculating, based on the transmission duration and the first movement parameter, a first traveling distance of the vehicle in the first traveling direction comprises: calculating, in response to that the first traveling direction is a straight direction and the first movement parameter comprises speed, a first traveling distance of the vehicle in the straight direction based on the transmission duration and the speed; and in response to that the first traveling direction is a turning direction and the first movement parameter comprises the speed and an angular velocity, calculating heading angle change amount based on the transmission duration and the angular velocity, calculating a turning radius based on the speed and the angular velocity, and calculating, based on the heading angle change amount and the turning radius, a first traveling distance of the vehicle in the turning direction. Thus Claim 6 contains allowable subject matter.
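For illustration only (a sketch of the recited arithmetic, not Applicant's implementation; variable names and units are assumed), the distinction recited in Claim 6 can be expressed as follows: in a straight direction the first traveling distance is the speed multiplied by the transmission duration, while in a turning direction the heading angle change amount is the angular velocity multiplied by the duration, the turning radius is the speed divided by the angular velocity, and the first traveling distance is the arc length, i.e., the turning radius multiplied by the heading angle change amount.

def traveling_distance(duration_s, speed_mps, angular_velocity_rps=None):
    if not angular_velocity_rps:
        # Straight direction: distance is simply speed multiplied by duration.
        return speed_mps * duration_s
    heading_change = angular_velocity_rps * duration_s   # heading angle change amount
    radius = speed_mps / angular_velocity_rps            # turning radius
    return radius * heading_change                       # arc length in the turning direction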
Claim 9 recites the limitations of wherein the calculating a position of the mobile object at the second moment based on the transmission duration, the third position parameter, the second traffic travel indication parameter, and the second movement parameter, to obtain a fourth position parameter comprises: determining a second traveling direction of the mobile object based on the second traffic travel indication parameter; calculating a second traveling distance of the mobile object in the second traveling direction based on the transmission duration and the second movement parameter; and calculating the fourth position parameter based on the third position parameter and the second traveling distance. Thus Claim 9 contains allowable subject matter.
Claim 19 recites similar limitations as to Claim 6 and therefore also contains allowable subject matter.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THANG G HUYNH whose telephone number is (571)272-5432. The examiner can normally be reached Mon-Thu 7:30am-4:30pm EST | Fri 7:30am-11:30am EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611
/T.G.H./Examiner, Art Unit 2611