DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 01/14/2026 have been fully considered but they are not persuasive.
On pages 3-6, Applicant argues that,
“The Examiner has rejected claims 1, 3-5, 7, 11-19, and 22-28
under 35 USC 103 as being unpatentable over Wu et al. (WO
2024/112351 A1 - hereinafter Wu) and Tomar (US 2022/0130064
A1 - hereinafter Tomar).
…
Moreover, in contrast to Wu, instant markings are not features
with pixel positions on the images. Instead, they are
annotations providing additional information about the
portions of capture data. "According to present teachings,
these markings annotate or denote or apply additional
information to portions 106A-N in a number of useful ways.",
see page 37, line 30, through page 38, line 2, of the instant
specification.
Furthermore, in Tomar, the markings are "feature points from
the 3D scene captured at least two positions of the camera,"
or "the current frame". These marking are for tracking or for
making measurements in the 3D space. See paragraphs [0029]
and [0035] of Tomar. Similarly to Wu, Tomar's markings are
features on the scenes or frames themselves. Unlike Wu and
Tomar, the instant markings are not provided on the scenes or
the frames.
Tomar's markings are not for providing additional information
or annotations about the capture data as in the present
design. Furthermore, there is no teaching in Tomar for the
user to enter a marking. In paragraph [0029], Tomar's user is
simply moving with the camera and not entering a marking. In
paragraph [0035], the user is taking a "basis photo" and not
entering a marking as in the present design.”
In response, Examiner respectfully disagrees and submits that Tomar clearly teaches, at least in [0045], that the user selects markings whose coordinates have been determined, as described at least in [0029], for measurements, e.g. measurement of the distance between two feature points. As such, in the combination with Wu, which teaches that feature points are automatically determined and used to define and index segments, Tomar teaches that those feature points can be selected, and thus the user is allowed to apply these feature points for measurements.
On pages 5-6, Applicant argues that,
“A person of ordinary skill in the art (POSA) will not be
motivated to combine Wu with Tomar in order to arrive at the
markings recited in limitations (e) and (b) of claims 1 and
19 respectively. Furthermore, a POSA would not be motivated
to enter the markings by a user as in the present technology.
Based on the above rationale alone, claims 1 and 19 are novel
and non-obvious over Wu and Tomar. The Examiner's rejections
are requested to be withdrawn.”
In response, Examiner respectfully disagrees and submits that Tomar clearly teaches that a user is allowed to apply one or more markings (determined feature points) to one or more portions of the video to perform certain measurements. Thus, one of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Tomar into the data capture system taught by Wu to allow the user to perform measurements on objects or surfaces in the scene, thus enhancing the utility of the system as described in the Office Action.
On page 6, Applicant argues that,
“Furthermore, the applicant submits that Wu and Tomar do not
teach limitations (f) and (c) of independent claims 1 and 19
respectively. This is at least because Wu's piecewise
collection of data is not analogous to the instant video and
IMU data decomposition and segmentation scheme as described
in detail in reference to Fig. 24 and the sections entitled
"Video Segmentation or Decomposing video data into Segments"
and "IMU Data Segmentation or Decomposition of IMU data into
Segments" on pages 99 and 106 respectively of the instant
specification.”
In response, Examiner respectfully submits that, without acquiescing to any of Applicant's characterizations of Wu and Tomar, these arguments are irrelevant because Fig. 24 and the sections "Video Segmentation or Decomposing video data into Segments" and "IMU Data Segmentation or Decomposition of IMU data into Segments" on pages 99 and 106 of the specification of the current application contain details that are not recited in limitations (f) and (c) of independent claims 1 and 19.
On pages 6-7, Applicant argues that,
“Wu's piecewise data collection spans across multiple tracking
sessions i.e. "in the database to be collected piecewise over
different motion-tracking sessions", see paragraph [0054] of
Wu. In contrast, the present decomposition of video and IMU
data occurs irrespective of the capture session. In other
words, instant data decomposition scheme allowing for
efficient storage and random-access retrieval of the capture
data is performed even within a capture session i.e. even on
the capture data of a single capture session.
Limitation (d) of claim 1 reciting "one or more portions of
capture data … during a capture session" in conjunction
with limitation (f) teaches that the one or more portions of
the capture data from a single session are decomposed. This
is in stark contrast to Wu whose piecewise data collection is
performed across multiple motion-tracking sessions to allow
its device to collect data from the multiple sessions i.e.
"In a possible implementation, the motion tracking device may
be used over a period that includes a plurality of motion
tracking sessions." See paragraph [0053] of Wu.
A POSA with access to Wu in combination with Tomar would not
arrive at the instant data decomposition and segmentation of
capture data from a single capture session as recited in
claims 1 and 19 and taught in the specification. Based on
this distinction alone, claims 1 and 19 are novel and non-
obvious over Wu and Tomar. The Examiner's rejections are
requested to be withdrawn.”
In response, Examiner respectfully disagrees and submits that, without acquiescing to any of Applicant’s characterizations of the prior art teachings, and for the sake of argument, Wu teaches, at least in [0053], that,
[0053] The motion-tracking process 611 may be executed by a motion-tracking device while it is in an active state. Accordingly, the method 600 may include detecting that the motion-tracking device is in the active state before executing the motion tracking process. For example, the active state may be detected by an AR application running on the motion-tracking device (e.g., AR glasses), which requires motion-tracking. In a possible implementation, the motion tracking device may be used over a period that includes a plurality of motion tracking sessions.
(emphasis added)
As such, the whole motion-tracking process, which lasts for the period over which the motion-tracking device is used, corresponds to the recited “capture session”. Such a period, despite including a plurality of motion-tracking sessions, is interpreted as a single capture session. In other words, each motion-tracking session is only part of the capture session. During such a capture session or period (see further Fig. 6 of Wu), one or more portions of video data and IMU data are collected.
On pages 7-8, Applicant argues that,
“The applicant submits that Wu and Tomar further do not teach
or imply limitations (g) and (d) of independent claims 1 and
19 respectively. This is at least because Wu's timestamps are
for merely identifying its tracking sessions in the database.
"In other words, the IMU data and the camera image can be
indexed (i.e., identified) in the database 660 based on
timestamps. This allows for the data in the database 660 to
be collected piecewise over different motion-tracking
sessions, as identified by time (e.g., t1, t2, etc.) to be
used for a calibration process 645", see paragraph [0054] of
Wu. For example, in Wu, a timestamp of "Jan 12, 2026 8am PST"
would identify one tracking session and a timestamp of "Jan
13, 2026 8am PST" would identify another tracking session.
This is not the case in the instant design.
In the present technology, while timestamps are part of
indexing, one timestamp does not represent the entirety of a
capture session. Instead, a timestamp in combination with a
duration value (and other fields) are used to index a video
segment and Not a capture session. As such, multiple
timestamps can be associated with different video segments of
the same capture session. At the time of retrieval, the
timestamps are used to re-assemble the video and IMU data as
and when required. See, page 103, line 10 through page 104,
line 26 of the instant specification.
A POSA with access to Wu and Tomar would not arrive at the
indexes recited in limitations (g) and (d) of independent
claims 1 and 19 respectively and as taught in the instant
specification. Based on this reasoning alone, independent
claims 1 and 19 are novel and non-obvious over Wu and Tomar.
The Examiner's rejections are requested to be withdrawn.”
In response, Examiner respectfully disagrees and submits that Wu, as discussed above, teaches that one or more portions of data are collected during a motion-tracking process, which corresponds to the capture session recited in the claim. As such, there are a plurality of segments of data during a capture session, each of which is identified by a corresponding timestamp as described in [0054] of Wu.
On pages 8-9, Applicant argues that,
“The applicant further submits that Wu and Tomar do not teach
or imply the non-sequential visual inertial odometry (VIO) as
recited in limitations (h) and (e) of independent claims 1
and 19 respectively. Non-sequential VIO of the instant design
is a key contribution of the present technology to the field.
As taught throughout the specification, instant non-
sequential VIO processes unordered/non-sequential portions of
capture data by virtue of its innovative design. For example,
see page 19, lines 1-8; page 50, lines 14-19 and page 63,
lines 20-29. As a consequence, the present technology allows
for out-of-order processing, parallelization, and resilience
to missing video data.
No such teaching of non-sequential VIO on unordered data is
available from Wu and Tomar, singly or in combination, in the
Examiner-applied passages or elsewhere. Wu is absolutely
silent about whether it can or cannot process its video or
IMU data in an out-of-order manner or non-sequentially.
Unlike claims 1 and 19, the Examiner-applied passages of Wu
or any other teachings of Wu say nothing about any non-
sequential aspects of its VIO. As is the case with other prior
art, Wu appears to only be able to process data sequentially
and hence does not read on instant claims 1 and 19.”
In response, Examiner respectfully disagrees and submits that, without acquiescing to any of Applicant’s characterizations of prior art teachings, Wu, at least in [0025], teaches:
[0025] The motion-tracking device 100 further includes an inertial measurement unit (i.e., IMU). The IMU can include a plurality of sensors that are aligned with a reference coordinate system having three dimensions (i.e., X, Y, Z). An IMU of a device may be configured to track its changes in position/orientation (i.e., track its motion) with respect to each of the three dimensions. The IMU measurement can be combined with the camera measurement described previously to help track the movement of the motion-tracking device. This form of motion tracking may be referred to as visual inertial odometry (VIO).
(emphasis added)
In the emphasized text above, the IMU measurement data and the camera measurement data to be combined are supplemental to each other, and neither ordered nor sequential with respect to each other. In other words, the video data are neither sequential to nor ordered before or after the IMU measurement data with which they are combined for motion tracking. Instead, the video data and the IMU measurement data are captured at the same time and associated with each other via a timestamp (see Wu, [0053]).
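For purposes of illustration only, the following minimal sketch (with hypothetical names not drawn from Wu, Tomar, or the instant specification) shows how two independently collected data streams can be associated by timestamp rather than by any sequential ordering between them, consistent with the reading of Wu [0053] above.

```python
# Illustrative sketch only (hypothetical names): pairing video and IMU
# segments by timestamp rather than by their order of arrival.
def pair_by_timestamp(video_segments, imu_segments, tolerance=0.01):
    """Return (video, imu) pairs whose timestamps match within a tolerance.

    video_segments, imu_segments: iterables of dicts with a "timestamp" key,
    supplied in any order; no sequential relationship between the two
    streams is assumed.
    """
    imu_list = list(imu_segments)
    if not imu_list:
        return []
    pairs = []
    for v in video_segments:  # iteration order is irrelevant
        # find the IMU segment closest in time to this video segment
        closest = min(imu_list, key=lambda s: abs(s["timestamp"] - v["timestamp"]))
        if abs(closest["timestamp"] - v["timestamp"]) <= tolerance:
            pairs.append((v, closest))
    return pairs
```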
On pages 9-10, Applicant argues that,
“Furthermore, limitations (h) and (e) of independent claims 1
and 19 respectively clearly recite the estimation of a
velocity profile by employing its non-sequential VIO and the
innovative marking scheme. As taught by the specification,
the present technology then performs constrained integration
on the velocity profile to obtain the position and orientation
of the capture apparatus. See at least page 14, lines 10-13;
page 58, lines 15-19 and page 66, line 4 through page 68,
line 27 of the instant specification.
No such teaching of obtaining a velocity profile as recited
in limitation (h) and (e) of claims 1 and 19 respectively is
available from Wu and Tomar singly or in combination,
implicitly or explicitly. In fact, the term profile does not
even appear in Wu and Tomar.
A POSA with access to Wu and Tomar would not arrive at the
instant non-sequential VIO performed on unordered segments of
data, and at estimating a velocity profile of the capture
apparatus based on the non-sequential VIO and the markings.
Based on this rationale alone, independent claims 1 and 19
are novel and non-obvious over Wu and Tomar. The Examiner's
rejections are requested to be withdrawn.
Thus, based on the above reasoning and rationale, independent
claims 1 and 19 are novel and unobvious over Wu and Tomar and
are patentable over all prior art of record. It follows that
their dependent claims are also novel and nonobvious and thus
patentable. The Examiner's rejections are requested to be
withdrawn.”
In response, Examiner respectfully submits that these arguments are irrelevant because details of “perform[ing] constrained integration on the velocity profile to obtain the position and orientation of the capture apparatus. See at least page 14, lines 10-13; page 58, lines 15-19 and page 66, line 4 through page 68, line 27 of the instant specification” are not present in the claims. Wu teaches, at least in [0025], that,
[0025] The motion-tracking device 100 further includes an inertial measurement unit (i.e., IMU). The IMU can include a plurality of sensors that are aligned with a reference coordinate system having three dimensions (i.e., X, Y, Z). An IMU of a device may be configured to track its changes in position/orientation (i.e., track its motion) with respect to each of the three dimensions. The IMU measurement can be combined with the camera measurement described previously to help track the movement of the motion-tracking device. This form of motion tracking may be referred to as visual inertial odometry (VIO).
(emphasis added)
Examiner respectfully submits that tracking the device’s changes in position is sufficient to provide a velocity profile of the device; the word “profile” need not appear in the reference.
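For purposes of illustration only, the following minimal sketch (with hypothetical names) shows that successive timestamped position estimates, of the kind produced by motion tracking as in Wu, suffice to yield velocity over time by finite differences.

```python
# Illustrative sketch only: a velocity profile derived from tracked positions.
def velocity_profile(positions, timestamps):
    """positions: list of (x, y, z) tuples; timestamps: list of seconds.

    Returns a list of (t, vx, vy, vz) samples computed by finite differences.
    """
    profile = []
    for i in range(1, len(positions)):
        dt = timestamps[i] - timestamps[i - 1]
        if dt <= 0:
            continue  # skip duplicate or out-of-order samples
        vx, vy, vz = (
            (positions[i][j] - positions[i - 1][j]) / dt for j in range(3)
        )
        profile.append((timestamps[i], vx, vy, vz))
    return profile
```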
On page 11, Applicant argues that,
“Furthermore, in regards to claim 5, Wu's timestamp filter is
not analogous to the timestamp filter recited in claim 5. As
taught in the specification and per above explanation, Wu's
timestamps are on a per-session basis. In contrast, each of
the instant video/IMU segments have a timestamp and a duration
value in the index. As such, there can be multiple timestamps
per (capture) session in the instant technology, as opposed
to Wu. Based on this distinction alone, claim 5 is novel and
unobvious over Wu and Tomar and is thus patentable in its own
right.”
In response, Examiner respectfully disagrees and submits that each of the timestamps taught by Wu identifies a segment among a plurality of segments within a capture session, i.e. the tracking process shown in Fig. 6 of Wu (with respect to interpretation of “a capture session”, see discussion of Wu above). The claim does not recite any duration value specified within the index. As such, the arguments are not persuasive.
On page 11, Applicant also argues that,
“In regards to claim 7, Wu does not teach or imply storing
accelerometer and gyroscope segments in an array as recited
in claim 7. While Fig. 3 of Wu shows a system block diagram
of its IMU, Wu is silent about how the data is actually
stored. A POSA with access to Wu/Tomar would not arrive at
storing accelerometer and gyroscope data in the data
structure of an array. Based on this distinction alone, claim
7 is novel and unobvious over Wu and Tomar and is thus
patentable in its own right.”
In response, Examiner respectfully disagrees and submits that, at least in [0026] and Fig. 3, Wu teaches:
[0026] FIG. 3 is a system block diagram of an IMU for a motion-tracking device, such as shown in FIG. 1. The IMU 300 may output a motion measurement having six components (i.e., 6 degrees of freedom) including a first acceleration in an x-direction (i.e., ax), a second acceleration in a y-direction (i.e., ay), a third acceleration in a z-direction (i.e., az), a first rotation (i.e., Rx) about an x-axis (ROLL), a second rotation (i.e., Ry) around ay-axis (PITCH), and a third rotation (i.e., Rz) around a z-axis (YAW). The six components are relative to a coordinate system (X, Y, Z) that may be aligned with, or define, a coordinate system of the motion-tracking device.
(emphasis added)
Specifically, the output of the IMU as shown in Fig. 3 clearly comprises gyroscope data (output by gyroscope 310) and acceleration data (output by accelerometer 320). Further, as described in at least [0053], the IMU data comprises a plurality of segments. Thus, each segment of IMU data comprises a corresponding segment of gyroscope data and a corresponding segment of acceleration data. There are a plurality of segments of IMU data stored in a database as discussed above. As a result, there are a corresponding plurality of segments of gyroscope data and acceleration data stored in the database. Examiner interprets the data structure storing such segments as an array.
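For purposes of illustration only, the following sketch (hypothetical structure, not drawn from Wu) shows one way per-segment accelerometer and gyroscope data could be held in an array-like container, consistent with the interpretation above.

```python
# Illustrative sketch only: an array of IMU segments, each holding a
# corresponding accelerometer segment and gyroscope segment.
imu_segments = [
    {
        "timestamp": 1700000000.0,  # start of the segment
        "accelerometer": [(0.01, -0.02, 9.81), (0.00, -0.01, 9.80)],   # (ax, ay, az)
        "gyroscope": [(0.001, 0.000, -0.002), (0.002, 0.001, -0.001)],  # (Rx, Ry, Rz)
    },
    {
        "timestamp": 1700000001.0,
        "accelerometer": [(0.02, -0.03, 9.79)],
        "gyroscope": [(0.000, 0.001, -0.003)],
    },
]
# The array index (or the timestamp) locates a given segment.
```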
On pages 11-12, Applicant argues that,
“In regards to claims 14, 17, 24 and 27, in paragraph [0041]
Wu is uploading its IMU/camera data itself. In a stark and
innovative contrast to such data uploading, the present
design only transmits the indices to remote storage as recited
in the above claims. Further, as taught in the specification,
an instant device only transmits the indices pertaining to
the video, IMU, the markings and the corresponding location
information through database table replication. The actual
video and IMU data for a specific capture session is only
uploaded from the capture device when the data is requested
by the user/inspector for processing.
The idea behind the instant design is that not all the data
may be significant, and the user may only want to process
data for a small time-duration. However, Wu's scheme needs to
send data back and forth either to determine calibration
parameters or to determine active or idle state. In contrast
to Wu, the amount of data transferred to the remote storage
in the present technology is much smaller since only indices
are being sent and not the data itself. Now, since the data
is segmented and saved on an instant device, when requested
by the user, multiple video and IMU segments can then be
uploaded parallelly for the processing to be performed.
A POSA with access to Wu/Tomar would not arrive at the instant
innovative design of sending just indices and not the data to
the remote storage as recited in claims 14, 17, 24 and 27.
Based on this distinction alone, claims 14, 17, 24 and 27 are
novel and unobvious over Wu/Tomar and are patentable in their
own right.”
In response, Examiner respectfully submits that Wu and Tomar teach storing the collected video, IMU data, and markings on a remote storage. Wu and Tomar do not explicitly teach storing the indices. The Office Action relies on the common practice of also storing the indices generated from the data in order to facilitate access to the corresponding data.
On pages 13-14, Applicant argues that,
“The Examiner has also rejected claims 2 and 20 under 35 USC
103 as being unpatentable over Wu and Tomar in view of Collins
et al. (US 2009/0265193 A1 - hereinafter Collins)
The applicant respectfully disagrees. This is at least
because the instant companion device provides status updates
to the user as recited in claims 2 and 20. In contrast,
Collins' device provides "Live video images of the inspection
from the inspection robot can be displayed on a video monitor
of the inspection control system", see paragraph [0073] of
Collins. In the instant design, the companion device is a
handheld or a wearable device, including a smartphone, a
smartwatch, a tablet, a wearable computer, AR goggles, among
others.
The instant companion device allows the user to perform
actions such as power/turn on/off the camera (s) of the capture
apparatus, shows a user prompt to wear the helmet with the
capture apparatus, and shows the icon to start/stop
recording. There is also a cloud icon with a text legend
reporting the number of pending files that are yet to be
uploaded to the remote storage base. Furthermore, the device
allows the user to enter a waypoint and to add a voice memo
as markings.
Consequently, the information package between an instant
companion device and the capture apparatus is deliberately
kept small in order to only show status and actions to be
performed. The live images of Collins are not required in the
instant design and would unduly bloat the payload between the
instant companion device and the capture apparatus.
Based on the above distinction alone, claims 2 and 20 are
novel and unobvious over Wu/Tomar and Collins and are thus
patentable in their own right.”
In response, Examiner respectfully disagrees and submits that the claims only require a companion device to issue commands to the data capture system (to the first computer application running on the system) and to provide status updates to the user. There is nothing regarding:
… a handheld or a wearable device, including a smartphone, a
smartwatch, a tablet, a wearable computer, AR goggles, among
others.
The instant companion device allows the user to perform
actions such as power/turn on/off the camera (s) of the capture
apparatus, shows a user prompt to wear the helmet with the
capture apparatus, and shows the icon to start/stop
recording. There is also a cloud icon with a text legend
reporting the number of pending files that are yet to be
uploaded to the remote storage base. Furthermore, the device
allows the user to enter a waypoint and to add a voice memo
as markings.
Consequently, the information package between an instant
companion device and the capture apparatus is deliberately
kept small in order to only show status and actions to be
performed.
As such, these details are irrelevant because they are not present in the claims. As described in the Office Action, Collins, at least in Fig. 6A and [0073], teaches a computer system, such as a laptop, or handheld device with the appropriate software configured to issue commands to a first computer application running on a robotic vehicle and to provide updated images as status updates to a user.
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Collins into the data capture system taught by Wu and Tomar to enhance the control interface of the system, e.g. controlling the system using a companion device in the same way as a remote control is used with a television set.
As such, Applicant’s arguments are not persuasive.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-5, 7, 11-19, and 22-28 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. (WO 2024/112351 A1 – hereinafter Wu) and Tomar (US 2022/0130064 A1 – hereinafter Tomar).
Regarding claim 1, Wu discloses a data capture system comprising: (a) a capture apparatus containing at least one camera and an inertial measurement unit (IMU) ([0042]-[0046]; Fig. 4 – AR glasses as a data capture system comprising at least a camera, e.g. any of the first camera 410 or the second camera 411, and an IMU); (b) a first set and a second set of computer-readable instructions stored in a first non-transitory storage medium and a second non-transitory storage medium respectively, and at least one microprocessor coupled to said first non-transitory storage medium for executing said first set of computer-readable instructions, and at least one microprocessor coupled to said second non-transitory storage medium for executing said second set of computer-readable instructions ([0020]; [0037]; [0076] – at least processor 150 executing a first set of instructions stored on a first part of the memory 160 and a second set of instructions stored on a second part of the memory 160 shown in Fig. 1); (c) said first set of computer-readable instructions causing a first computer application ([0020]; [0037]) to: (d) collect one or more portions of capture data while said capture apparatus is carried by a user undergoing motion at a site during a capture session, wherein said capture data comprises video data and IMU data produced by said at least one camera and said IMU respectively ([0021]-[0032] – collecting one or more portions of video data captured by the first camera and/or the second camera and IMU data captured by the inertial measurement unit further shown in Fig. 3); (e) allow applying one or more markings to said one or more portions (Fig. 2; [0020]; [0022] – applying markings to identified features in said one or more portions); (f) decompose said one or more portions into a plurality of video segments and a plurality of IMU segments ([0054] – decomposing said portions into a plurality of video segments and a plurality of IMU segments, i.e. each segment is defined by a timestamp which serves as an index to the segment in a database); (g) index said plurality of video segments and said plurality of IMU segments by employing a video index and an IMU index respectively ([0054] – indexing the plurality of video segments and the plurality of IMU segments using timestamps as indices to the segments in a database); and (h) said second set of computer-readable instructions causing a second computer application ([0076]) to estimate a velocity profile of said capture apparatus by performing non-sequential visual inertial odometry (VIO) on said plurality of video segments and said plurality of IMU segments, and by employing said one or more markings ([0025] – performing non-sequential visual inertial odometry by combining said plurality of video segments and said plurality of IMU segments by employing one or more markings as described in [0022]).
Wu does not disclose (c) said first set of computer-readable instructions causing a first computer application to: (e) allow said user to apply the one or more markings; (g) index said one or more markings by employing a markings index.
Tomar discloses (c) a first set of computer-readable instructions causing a first computer application to: (e) allow a user to apply one or more markings to one or more portions of video data ([0029]; [0035] – allowing a user to mark a frame and/or feature points); (g) index said one or more markings by employing a markings index ([0029]; [0035] – indexing the feature points so that they can be made available by calling routines).
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Tomar into the data capture system taught by Wu to allow the user to perform measurements on objects or surfaces in the scene, thus enhancing the utility of the system.
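For purposes of illustration only, the following minimal sketch (hypothetical names and file identifiers) shows the kind of timestamp-keyed indexing of video segments, IMU segments, and markings referred to in limitations (f) and (g) above, under the interpretation that a timestamp serves as an index to a segment in a database.

```python
# Illustrative sketch only: indexing segments and markings by timestamp so
# that each can be retrieved directly (random access by index).
video_index = {1700000000.0: "video_seg_000.bin", 1700000004.0: "video_seg_001.bin"}
imu_index = {1700000000.0: "imu_seg_000.bin", 1700000004.0: "imu_seg_001.bin"}
markings_index = {1700000002.5: {"type": "feature_point", "coords": (120, 340)}}

def lookup(index, timestamp):
    """Retrieve the entry keyed by a given timestamp, or None if absent."""
    return index.get(timestamp)
```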
Regarding claim 3, Wu in view of Tomar also discloses the data capture system of claim 1, wherein said at least one microprocessor executing said first set of computer-readable instructions of said first computer application is integrated with said at least one camera in a common housing (Fig. 4; [0045]-[0046]).
Regarding claim 4, Wu in view of Tomar also discloses the data capture system of claim 1, wherein said capture apparatus comprises an embedded computer containing said at least one microprocessor executing said first set of computer-readable instructions of said first computer application ([0020]; [0037]; [0076] – at least processor 150 executing first set of instructions stored on a first part of the memory 160).
Regarding claim 5, see the teachings of Wu and Tomar as discussed in claim 1 above. Wu also discloses said plurality of video segments are produced by a video pipeline comprising a timestamp filter ([0053] – at least a video pipeline comprising a timestamp filter to read timestamps of the video data).
However, Wu and Tomar do not disclose the video pipeline comprising a scaler, a transposer, a GPU encoder and an HTTP Live Streaming (HLS) multiplexer.
Official Notice is taken that a video pipeline comprising a scaler, a transposer, a GPU encoder and an HTTP Live Streaming (HLS) multiplexer is well known in the art.
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate such a video processing pipeline to produce the video segments in the data capture system taught by Wu to provide video segments of desired size and view, encoded, and multiplexed to provide video segments optimized for storage or transmission.
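For purposes of illustration only, the following sketch shows one possible realization of the pipeline stages named above (scaler, transposer, GPU encoder, HLS multiplexer), here expressed as an ffmpeg invocation driven from Python. The specific tool, options, and file names are assumptions of this sketch, not teachings of Wu or Tomar.

```python
# Illustrative sketch only: a video pipeline producing HLS segments.
import subprocess

def run_video_pipeline(source, out_playlist="out.m3u8"):
    subprocess.run(
        [
            "ffmpeg", "-i", source,
            "-vf", "scale=1280:720,transpose=1",  # scaler and transposer stages
            "-c:v", "h264_nvenc",                 # GPU encoder stage
            "-f", "hls", "-hls_time", "4",        # HLS multiplexer producing segments
            out_playlist,
        ],
        check=True,
    )
```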
Regarding claim 7, Wu in view of Tomar also discloses the data capture system of claim 1, wherein said plurality of IMU segments comprise a plurality of accelerometer segments and a plurality of gyroscope segments, and wherein said plurality of accelerometer segments and said plurality of gyroscope segments are stored in an array (Fig. 3; [0026]-[0027]; [0038]; [0053]).
Regarding claim 11, Wu in view of Tomar also discloses the data capture system of claim 1, wherein said plurality of video segments and said plurality of IMU segments are uploaded from a local storage to a remote storage according to a data storage and upload scheme ([0041] – uploading for remote storage of the IMU/camera data).
Regarding claim 12, Wu in view of Tomar also discloses the data capture system of claim 11, wherein said plurality of video segments and said plurality of IMU segments can be read from one of said local storage and said remote storage by a random-access retrieval based on said video index and said IMU index respectively ([0053]).
Regarding claim 13, Wu also discloses the data capture system of claim 11, wherein said capture apparatus is an always-on device (AOD) ([0020]; [0055] – always-on even when it is in an idle state).
Regarding claim 14, Wu in view of Tomar also discloses the data capture system of claim 13, wherein said video, said IMU data and said markings are copied to said remote storage by table replication ([0041] – uploading for remote storage of the IMU/camera data in view of Tomar teaching said markings as discussed in claim 1 above).
However, Wu does not explicitly disclose copying the respective indices.
Official Notice is taken that one of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate copying the respective indices, i.e. the timestamps described in paragraph [0053] and shown in Fig. 6 of Wu, to the remote storage in the same process as copying the respective data in order to ensure data integrity, i.e. allowing the data to be accessed in the same manner using the indices.
Regarding claim 15, see the teachings of Wu and Tomar as discussed in claim 13. However, Wu and Tomar do not disclose said data storage upload scheme utilizes a bidirectional WebSocket connection established by a messaging service.
Official Notice is taken that a bidirectional WebSocket connection established by a messaging service is well known in the art.
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate a bidirectional WebSocket connection established by a messaging service to the data storage upload scheme taught by Wu and Tomar because WebSockets connection provides a lower overhead since the connection is kept alive, which eliminates the need to establish a new connection for every request, resulting in reduced latency and network traffic.
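For purposes of illustration only, the following sketch (hypothetical endpoint and message contents) shows a bidirectional WebSocket connection of the kind contemplated by the Official Notice above, using the Python websockets package.

```python
# Illustrative sketch only: one persistent, bidirectional WebSocket connection
# carries both the client's message and the service's reply.
import asyncio
import json
import websockets

async def announce_segment(entry):
    async with websockets.connect("wss://example.invalid/upload-service") as ws:
        await ws.send(json.dumps(entry))  # notify the service of a new segment
        ack = await ws.recv()             # reply arrives on the same connection
        return ack

# Example use (hypothetical payload):
# asyncio.run(announce_segment({"timestamp": 1700000000.0, "kind": "video"}))
```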
Regarding claim 16, see the teachings of Wu and Tomar as discussed in claim 11 above. However, Wu and Tomar do not disclose said capture apparatus is an on-off device (OOD).
Official Notice is taken that a capture apparatus that is an on-off device (OOD) is well known in the art.
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate an on-off device into the apparatus taught by Wu and Tomar to reduce power consumption while the device is not in use.
Regarding claim 17, see the teachings of Wu and Tomar as discussed in claim 16 above. However, Wu and Tomar do not disclose said first set of computer-readable instructions further cause said first computer application to transmit one or more entries of said video index, said IMU index and said markings index to said remote storage via a RESTful (Representational State Transfer) API over HTTP.
Official Notice is taken that (1) transmitting data (including indices) to remote storage and (2) transmitting data via a RESTful (Representational State Transfer) API over HTTP are well known in the art.
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate transmitting entries to said remote storage via a RESTful (Representational State Transfer) API over HTTP into the first computer application because (1) data integrity is maintained, i.e. the data can be accessed using the indices in the same manner, and (2) the API is easy to understand and use.
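For purposes of illustration only, the following sketch (hypothetical endpoint and field names) shows transmitting index entries, rather than the underlying video or IMU data, to remote storage over a RESTful API using HTTP.

```python
# Illustrative sketch only: POSTing index entries to a RESTful endpoint.
import requests

def push_index_entries(base_url, video_index, imu_index, markings_index):
    payload = {
        "video_index": video_index,
        "imu_index": imu_index,
        "markings_index": markings_index,
    }
    resp = requests.post(f"{base_url}/indices", json=payload, timeout=10)
    resp.raise_for_status()  # surface HTTP errors to the caller
    return resp.json()
```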
Regarding claim 18, see the teachings of Wu and Tomar as discussed in claim 16 above. However, Wu and Tomar do not disclose said data storage and upload scheme utilizes a background process that uploads said plurality of video segments and said plurality of IMU segments to said remote storage.
Official Notice is taken that a data storage and upload scheme that utilizes a background process to upload data to a remote storage is well known in the art.
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate utilizing a background process into the data storage and upload scheme in the data capture system taught by Wu and Tomar in order to avoid interference with the user’s experience.
Claim 19 is rejected for the same reason as discussed in claim 1 above.
Claim 22 is rejected for the same reason as discussed in claim 11 above.
Claim 23 is rejected for the same reason as discussed in claim 13 above.
Claim 24 is rejected for the same reason as discussed in claim 14 above.
Claim 25 is rejected for the same reason as discussed in claim 15 above.
Claim 26 is rejected for the same reason as discussed in claim 16 above.
Claim 27 is rejected for the same reason as discussed in claim 17 above.
Regarding claim 28, see the teachings of Wu and Tomar as discussed in claim 26 above. However, Wu and Tomar do not disclose said data storage and upload scheme utilizes a background process that uploads said plurality of video segments and said plurality of IMU segments to said remote storage via HTTP file upload.
Official Notice is taken that a data storage and upload scheme that utilizes a background process to upload data to a remote storage via HTTP file upload is well known in the art.
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate utilizing a background process with HTTP file upload into the data storage and upload scheme in the data capture system taught by Wu and Tomar in order to avoid interference with the user’s experience.
Claims 2 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wu and Tomar as applied to claims 1, 3-5, 7, 11-19, and 22-28 above, and further in view of Collins et al. (US 2009/0265193 A1 – hereinafter Collins).
Regarding claim 2, see the teachings of Wu and Tomar as discussed in claim 1 above. However, Wu and Tomar do not disclose a companion device containing at least one microprocessor coupled to a third non-transitory storage medium for executing a third set of computer-readable instructions, wherein said third set of computer-readable instructions cause a third computer application to issue commands to said first computer application and to provide status updates to said user.
Collins discloses a companion device containing at least one microprocessor coupled to a third non-transitory storage medium for executing a third set of computer-readable instructions, wherein said third set of computer-readable instructions cause a third computer application (Fig. 6A; [0073] – a computer system, such as a laptop, or handheld device with the appropriate software) to issue commands to a first computer application ([0073] – issuing control commands to a first computer application on a robotic vehicle) and to provide status updates to said user ([0073] – providing updated images).
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Collins into the data capture system taught by Wu and Tomar to enhance the control interface of the system.
Claim 20 is rejected for the same reason as discussed in claim 2 above.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Wu and Tomar as applied to claims 1, 3-5, 7, 11-19, and 22-28 above, and further in view of Nagashima et al. (US 2008/0165936 A1 – hereinafter Nagashima).
Regarding claim 6, see the teachings of Wu and Tomar as discussed in claim 1 above. However, Wu and Tomar do not explicitly disclose said video segments are produced by a video pipeline comprising a hardware video encoder and an MPEG multiplexer.
Nagashima discloses video segments are produced by a video pipeline comprising a hardware video encoder and an MPEG multiplexer ([0044]; [0054]; Fig. 4).
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Nagashima into the data capture system taught by Wu and Tomar to reduce the storage space of the video segments.
Claims 8 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Wu and Tomar as applied to claims 1, 3-5, 7, 11-19, and 22-28 above, and further in view of Salgian et al. (US 2019/0347783 A1 – hereinafter Salgian).
Regarding claim 8, see the teachings of Wu and Tomar as discussed in claim 1 above. However, Wu and Tomar do not explicitly disclose said at least one camera is attached to one of a helmet worn by said user during said capture session and a monopod carried by said user during said capture session.
Salgian discloses at least one camera is attached to one of a helmet worn by said user during said capture session and a monopod carried by said user during said capture session (Fig. 3; [0027] - a helmet mounted stereo camera and IMU sensor 302).
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Salgian into the data capture system taught by Wu and Tomar to employ the system conveniently in risky environment that requires safety to be provided for the user.
Claim 21 is rejected for the same reason as discussed in claim 3 above.
Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Wu and Tomar as applied to claims 1, 3-5, 7, 11-19, and 22-28 above, and further in view of Chattopadhyay et al. (WO 2014/057496 A2 – hereinafter Chattopadhyay).
Regarding claim 9, see the teachings of Wu and Tomar as discussed in claim 1 above. However, Wu and Tomar do not explicitly disclose said video index comprises a plurality of entries corresponding to said plurality of video segments, and wherein each of said plurality of entries comprises a starting timestamp, a duration, a camera label and a resource locator of the video segment corresponding to said each of said plurality of entries.
Chattopadhyay discloses a video index comprises a plurality of entries corresponding to said plurality of video segments, and wherein each of said plurality of entries comprises a starting timestamp, a duration, a camera label and a resource locator of the video segment corresponding to said each of said plurality of entries (page 11 - cumulative metadata comprises of event id, event type, video url, UTC timestamp of event occurrence, start time of the event, frame number, duration, location of the event, camera id).
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Chattopadhyay into the data capture system taught by Wu and Tomar to provide more context information about the video segments, which is helpful in identifying the video segments.
Regarding claim 10, see the teachings of Wu and Tomar as discussed in claim 1 above. However, Wu and Tomar do not explicitly disclose said IMU index comprises a plurality of entries corresponding to said plurality of IMU segments, and wherein each of said plurality of entries comprises a starting timestamp, a duration, a sensor label and a resource locator of the IMU segment corresponding to said each of said plurality of entries.
Chattopadhyay discloses a data index comprises a plurality of entries corresponding to said plurality of data segments, and wherein each of said plurality of entries comprises a starting timestamp, a duration, a sensor label and a resource locator of the data segment corresponding to said each of said plurality of entries (page 11 - cumulative metadata comprises of event id, event type, video url, UTC timestamp of event occurrence, start time of the event, frame number, duration, location of the event, camera id).
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Chattopadhyay into the IMU segments in the data capture system taught by Wu and Tomar to provide more context information about the IMU segments, which is helpful in identifying the IMU segments.
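For purposes of illustration only, the following sketch (hypothetical field names) shows index entries of the kind discussed for claims 9 and 10, each holding a starting timestamp, a duration, a camera or sensor label, and a resource locator for the corresponding segment.

```python
# Illustrative sketch only: a per-segment index entry for video or IMU data.
from dataclasses import dataclass

@dataclass
class SegmentIndexEntry:
    start_timestamp: float   # UTC seconds at which the segment begins
    duration: float          # segment length in seconds
    label: str               # camera label (video) or sensor label (IMU)
    resource_locator: str    # URL or path of the stored segment

video_entry = SegmentIndexEntry(
    1700000000.0, 4.0, "camera_front", "https://storage.example/video_seg_000.ts"
)
imu_entry = SegmentIndexEntry(
    1700000000.0, 4.0, "imu_0", "https://storage.example/imu_seg_000.bin"
)
```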
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUNG Q DANG whose telephone number is (571)270-1116. The examiner can normally be reached IFT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thai Q Tran can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HUNG Q DANG/Primary Examiner, Art Unit 2484