DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendments filed 12/29/2025 have been entered and made of record. Claims 1, 2, 4-12, and 14-20 have been amended. Claims 21 and 22 have been added. Claims 1, 2, 4-12, and 14-22 are pending.
Response to Arguments
Applicant’s arguments in the Remarks filed on 12/29/2025 have been considered but are moot in view of the new ground(s) of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Messia in view of Shikata
Claims 1, 5-11, and 15-22 are rejected under 35 U.S.C. 103 as being unpatentable over Messia et al. (USPubN 2023/0171568; hereinafter Messia) in view of Shikata et al. (USPubN 2019/0199992; hereinafter Shikata).
As per claim 1, Messia teaches a method comprising: receiving a communication from one or more devices capable of recording content (“A method of managing and sharing information based upon telematic information related to mobile devices includes one or more processors receiving telematic data from one or more mobile devices” in Abstract);
determining, using a wireless communication transceiver configured to send directional wireless data to the one or more devices, a geographical location and an orientation of the one or more devices (“the system includes three different components all working together: mobile devices which collect and send GPS and other telematic data; a processing entity such as an administrator processor which can view, manage, and initiate sharing of the information based upon telematic data related to the mobile devices; and external third party devices which receive the shared data and can view the results of the data and information which has been shared with them” in Para. [0017], “Mobile devices may be any device or set of devices that can capture and transmit GPS data and other telematic data including video data. One or more mobile devices collects GPS data, as well as additional device and environment data and sends that data to a server. The mobile devices may be cell phones, tablets, GPS hardware, cars, trucks, planes, trains, boats, motorcycles, bicycles or any other device where GPS can be tracked. Telematic data a device may capture, in addition to location, could be heading, speed, breaking, engine idle, door open, lights activated, stop arm (in the case of a school bus).” in Para. [0020]);
receiving content capturing an event and recorded on the one or more devices (“The processors receive trigger criteria from at least one processing entity. The trigger criteria is based upon the telematic data and represents an event to initiate a request to share information relating to the mobile devices” in Abstract, “A plurality of mobile devices 114 are included as part of the system. These mobile devices may include automobiles, smart phones and other communication devices, mobile computers and tablets, trains, flying objects such as airplanes, bicycles, buses, trucks, watercraft, motorcycles, and trolleys or segways, as well as any other object which is capable of movement in which may generate telematic and information including GPS coordinates. The mobile devices 114 in accordance with the system, may be designated individually or collectively in groups to share information based upon their telematic data. Information regarding which mobile devices may share information and what information they may each share may be stored in the memory associated with the processors” in Para. [0024]);
storing the content capturing the event and recorded on the one or more devices; and creating, from a collection of recordings comprising at least the stored content capturing the event and recorded on the one or more devices, a representation of the event by combining segments of the collection of recordings (“The mobile devices 114 are enabled to send telematic data 111 including GPS data to the system 100 and the processors and/or servers 102. Third party devices 116 are designated and enabled to view the results of selectively managed and shared information based upon telematic information related to mobile devices 114. The third party devices 116 are allowed to view shared information from and relating to the mobile devices when a mobile device 114 is subject to a trigger criteria 118 based upon the telematic data or information. The trigger criteria 118 is defined by the processing entity 117, which may correspond to a system administrator device 117. Within the system 100 and memory associated therewith, the trigger criteria 118 and telematic information 111 may be stored in any desirable format. A process or algorithm 122 performed by the processors 102 uses the trigger criteria 118 and telematic data 111 to compare and process the same to determine if a trigger criteria has been met. Processes and algorithms involve calculations to compare the trigger criteria 118 with telematic data 111. The calculations generate a result 124 when one or more criteria of the trigger criteria are met. The result of such calculations may also be stored in memory and used for future use if desired. The system 100 will designate the mobile object as being allowed to share information including telematic information when the trigger criteria is met for any particular mobile device 114. The system 100 will then initiate a request to share information with and to authorized third party devices 116. The authorized third party devices may be defined by the processing entity 117 and/or the mobile devices 114. For example, each mobile device may have one or more third party devices associated therewith which are allowed to receive shared information when a trigger criteria has been met. The trigger criteria may be the same for each mobile device 114 or may be different for each mobile device, or may be a combination thereof” in Para. [0027], Para. [0030], Para. [0036]).
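Examiner's note: as an illustration only, the comparison loop Messia describes in Para. [0027] — checking incoming telematic data against stored trigger criteria and initiating a share request to authorized third-party devices when a criterion is met — might be sketched as follows. The example criterion (a speed limit inside a geofence) and all identifiers are hypothetical and are not drawn from Messia.

```python
# Minimal illustrative sketch of the trigger-criteria comparison described
# in Messia, Para. [0027]. All names and the example criterion are
# hypothetical, not Messia's implementation.
from dataclasses import dataclass

@dataclass
class TelematicData:
    device_id: str
    lat: float
    lon: float
    speed_mph: float

@dataclass
class TriggerCriteria:
    max_speed_mph: float
    bbox: tuple  # (min_lat, min_lon, max_lat, max_lon) geofence

    def is_met(self, t: TelematicData) -> bool:
        min_lat, min_lon, max_lat, max_lon = self.bbox
        in_area = min_lat <= t.lat <= max_lat and min_lon <= t.lon <= max_lon
        return in_area and t.speed_mph > self.max_speed_mph

def process_telematics(samples, criteria, third_party_devices):
    """Compare each telematic sample with the trigger criteria and initiate
    a share request to the authorized third-party devices when it is met."""
    requests = []
    for t in samples:
        if criteria.is_met(t):
            requests.append({"device": t.device_id,
                             "share_with": third_party_devices})
    return requests
```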
Messia is silent about wherein the representation comprises a synthetic image generated by combining at least two different perspectives of the event captured from different orientations of the one or more devices to generate a third perspective of the event, and wherein the third perspective of the event provides a viewing angle that was not captured by the one or more devices.
Shikata teaches wherein the representation comprises a synthetic image generated by combining at least two different perspectives of the event captured from different orientations of the one or more devices to generate a third perspective of the event, and wherein the third perspective of the event provides a viewing angle that was not captured by the one or more devices (“the image processing system includes the plurality of cameras installed in the stadium, an image generation apparatus 200, an information processing apparatus 100, and a user terminal 300. The plurality of cameras is connected with each other via transmission cables. The plurality of cameras transmits captured images to the image generation apparatus 200. Referring to an example illustrated in FIG. 1, the plurality of cameras is disposed so as to capture an entire range or a part of the range of the stadium such as a soccer stadium. Each of the plurality of cameras may be a camera for capturing a still image, a moving image, or both a still image and a moving image” in Para. [0019], “generating a virtual viewpoint image. The image generation apparatus 200 accumulates images captured by the plurality of cameras. The image generation apparatus 200 generates a virtual viewpoint image group by using images captured with the plurality of cameras. A virtual viewpoint image group includes a plurality of virtual viewpoint images having different viewpoints.” in Para. [0020]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Messia with the above teachings of Shikata in order to improve the user experience by allowing the event to be viewed from virtual viewpoints that were not captured by any single camera (Shikata, Para. [0020]).
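Examiner's note: Shikata does not disclose source-level detail, so the following is only a minimal sketch of the general idea the cited paragraphs describe — projecting scene geometry recovered from the multiple cameras into an arbitrary virtual camera to produce a view no physical camera captured. The point cloud, intrinsics K, and virtual pose (R, t) are assumed inputs, not Shikata's disclosure.

```python
# Minimal sketch of virtual-viewpoint rendering: splat a reconstructed 3D
# point cloud into a hypothetical virtual pinhole camera with a z-buffer.
import numpy as np

def render_virtual_view(points_xyz, colors, K, R, t, height, width):
    """points_xyz: (N, 3) world coordinates; colors: (N, 3) uint8 RGB;
    K: (3, 3) intrinsics; R, t: world-to-camera rotation and translation."""
    cam = (R @ points_xyz.T) + t.reshape(3, 1)     # world -> camera frame
    in_front = cam[2] > 1e-6                       # keep points ahead of camera
    cam, colors = cam[:, in_front], colors[in_front]
    proj = K @ cam
    uv = np.round(proj[:2] / proj[2]).astype(int)  # pinhole projection
    img = np.zeros((height, width, 3), dtype=np.uint8)
    depth = np.full((height, width), np.inf)
    for (u, v), z, c in zip(uv.T, cam[2], colors):
        if 0 <= u < width and 0 <= v < height and z < depth[v, u]:
            depth[v, u] = z                        # nearest point wins
            img[v, u] = c
    return img
```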
As per claim 5, Messia and Shikata teach all of the limitations of claim 1.
Messia teaches wherein at least one of the one or more devices comprises a smartphone (Para. [0016]).
As per claim 6, Messia and Shikata teach all of the limitations of claim 1.
Messia teaches wherein at least one of the one or more devices comprises a device mounted on a drone (Para. [0016]).
As per claim 7, Messia and Shikata teach all of the limitations of claim 1.
Messia teaches wherein the geographical location of the one or more devices is determined based on decentralized communication among the one or more devices and a second device (Para. [0016], Fig. 1).
As per claim 8, Messia and Shikata teach all of the limitations of claim 1.
Messia teaches wherein the creating a representation of the event further comprises associating the segments of the collection of recordings with a live broadcast (Para. [0021]).
As per claim 9, Messia and Shikata teach all of the limitations of claim 1.
Messia teaches further comprising causing display, via a user interface, of an incentive prompt to one or more users of the one or more devices offering a revenue share to record content of the event (“Once a trigger criteria is met and a mobile device is allowed to share information, a notification will be sent to the user of one or more corresponding designated external third-party devices. These external third-party devices are preferably not devices that are part of the data managed through the administration processing entity and do not require the administrator to have any information about the devices outside of a user's contact information such as email or phone number for notifications. Notifications can include text, email, or push notifications or any combination thereof. The notification may include an accessible link such as a URL; this link may give the external third-party device access to a web or other interface that will allow them to see the selected mobile devices and the shared information for the configured time range or when other specified trigger criteria has been met. The third-party device user can see the data in real time live in the given time range, specified geographic area, or for specified events. More than one mobile device can be configured for sharing at a single time or individual third-party configurations of device data to third-party users and devices may be setup” in Para. [0021], “when both events occur, based upon the telematic data indicating the same, a request to share information is initiated by the system. Finally, the information shared by each mobile device may be different (or the same). For example, in the above example, the bus may share information about its location and a video of its movement at the time of the trigger criteria while the smartphones may share information about their geographic location and video of the passenger/smart phone user” in Para. [0030], “the system may be connected to a personal computer, tablet device or smartphone to communicate the assigned or unassigned status of mobile objects to user via a graphical user interface.” in Para. [0058]).
As per claim 10, Messia and Shikata teach all of the limitations of claim 9.
Messia teaches wherein the revenue share corresponds to a relative proportion of content contributed by one or more users of the one or more devices to the representation of the event (Para. [0021], [0030]).
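Examiner's note: Messia does not specify how such a proportion would be computed; the following is a minimal sketch assuming the hypothetical metric "seconds of each user's footage used in the final representation".

```python
# Minimal sketch of a proportional revenue split. The per-user metric
# (seconds of footage used) is a hypothetical assumption, not Messia's.
def revenue_shares(seconds_used_by_user, total_payout):
    """Split total_payout in proportion to each contributor's share of
    the footage actually used in the event representation."""
    total = sum(seconds_used_by_user.values())
    return {user: total_payout * s / total
            for user, s in seconds_used_by_user.items()}

# revenue_shares({"alice": 90, "bob": 30}, 100.0)
# -> {"alice": 75.0, "bob": 25.0}
```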
As per claim 11, the limitations of claim 11 have been discussed in the rejection of claim 1 and are rejected under the same rationale.
As per claim 15, the limitations of claim 15 have been discussed in the rejection of claim 5 and are rejected under the same rationale.
As per claim 16, the limitations of claim 16 have been discussed in the rejection of claim 6 and are rejected under the same rationale.
As per claim 17, the limitations of claim 17 have been discussed in the rejection of claim 7 and are rejected under the same rationale.
As per claim 18, the limitations of claim 18 have been discussed in the rejection of claim 8 and are rejected under the same rationale.
As per claim 19, the limitations of claim 19 have been discussed in the rejection of claim 9 and are rejected under the same rationale.
As per claim 20, the limitations of claim 20 have been discussed in the rejection of claim 10 and are rejected under the same rationale.
As per claim 21, Messia and Shikata teach all of the limitations of claim 1.
Messia is silent about wherein the synthetic image is further generated by: transforming respective image planes of the at least two different perspectives of the event; and determining pixel values of the synthetic image based at least in part on the respective transformed image planes and objects of the at least two different perspectives of the event.
Shikata teaches wherein the synthetic image is further generated by: transforming respective image planes of the at least two different perspectives of the event; and determining pixel values of the synthetic image based at least in part on the respective transformed image planes and objects of the at least two different perspectives of the event (“Although the following description will be made on the premise that the viewpoint determination unit 105 determines a position on three-dimensional coordinates as the position of the viewpoint for the virtual viewpoint image, the viewpoint determination unit 105 may be configured to determine a position on a two-dimensional plane” in Para. [0033], “The image generation apparatus 200 may generate, instead of generating a virtual viewpoint image group, information for generating a virtual viewpoint image, such as information that indicates a three-dimensional model and an image subjected to mapping to the three-dimensional model. In other words, the virtual viewpoint image generation unit 205 may generate, instead of generating a rendered virtual viewpoint image, information required for the information processing apparatus 100 or the user terminal 300 to render the virtual viewpoint image. The rotation base point data generation unit 206 outputs, as the rotation base point position, position information about a specific object or specific position acquired by the image analysis unit 207 (described below), to the virtual viewpoint image generation unit 205.” in Para. [0041]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Messia with the above teachings of Shikata in order to improve the user experience.
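Examiner's note: Shikata's apparatus renders from a three-dimensional model rather than a single plane; purely as a simplified planar stand-in for the claimed steps — transforming one image plane onto another and determining output pixel values from the transformed, aligned planes — the following OpenCV sketch estimates a homography between two views and blends them. It is illustrative only and is not Shikata's method.

```python
# Simplified planar stand-in: estimate a homography between two views with
# ORB features, transform view A's image plane into view B's frame, and set
# output pixel values by averaging the aligned planes.
import cv2
import numpy as np

def blend_via_homography(img_a, img_b):
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # plane transform A -> B
    h, w = img_b.shape[:2]
    warped_a = cv2.warpPerspective(img_a, H, (w, h))      # transformed image plane
    return cv2.addWeighted(warped_a, 0.5, img_b, 0.5, 0)  # pixels from both views
```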
As per claim 22, the limitations of claim 22 have been discussed in the rejection of claim 21 and are rejected under the same rationale.
Messia in view of Shikata and Racz
Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Messia et al. (USPubN 2023/0171568; hereinafter Messia) in view of Shikata et al. (USPubN 2019/0199992; hereinafter Shikata), further in view of Racz et al. (USPubN 2018/0330112; hereinafter Racz).
As per claim 2, Messia and Shikata teach all of the limitations of claim 1.
Messia and Shikata are silent about further comprising analyzing the collection of recordings for negligible content, wherein the negligible content is removed from the collection of recordings.
Racz teaches further comprising analyzing the collection of recordings for negligible content, wherein the negligible content is removed from the collection of recordings (Para. [0133], [0189]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Messia and Shikata with the above teachings of Racz in order to improve the usage of storage space.
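Examiner's note: Racz's specific criteria are not reproduced here; the following is a generic stand-in assuming the hypothetical rule that frames with negligible inter-frame motion are treated as negligible content and dropped from the recording.

```python
# Generic stand-in (not Racz's method): remove "negligible" frames whose
# mean absolute difference from the last kept frame is below a threshold.
import numpy as np

def drop_negligible(frames, motion_threshold=2.0):
    kept, prev = [], None
    for frame in frames:  # each frame: (H, W) grayscale np.ndarray
        f = frame.astype(float)
        if prev is None or np.mean(np.abs(f - prev)) >= motion_threshold:
            kept.append(frame)  # enough change: keep this frame
            prev = f
    return kept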
As per claim 12, the limitations of claim 12 have been discussed in the rejection of claim 2 and are rejected under the same rationale.
Messia in view of Shikata and Glaser
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Messia et al. (USPubN 2023/0171568; hereinafter Messia) in view of Shikata et al. (USPubN 2019/0199992; hereinafter Shikata), further in view of Glaser (USPubN 2021/0201431).
As per claim 4, Messia and Shikata teach all of the limitations of claim 1.
Messia and Shikata are silent about further comprising analyzing a static background of the content capturing the event and recorded on the one or more devices to determine desirability of the content.
Shikata teaches wherein the analyzing comprises using background matching to further determine the geographical location and the orientation of the one or more devices (“the information processing apparatus 100 may acquire not only the position information about the user terminal 300 but also information about the orientation of the user terminal 300. In this case, the information processing apparatus 100 acquires information about the orientation acquired through the electronic compass of the user terminal 300. The information processing apparatus 100 may identify a subject as the rotation base point based on the position and orientation of the user terminal 300. For example, the information processing apparatus 100 may identify a player existing in the direction in which the user terminal 300 is oriented by the user as the rotation base point.” in Para. [0068]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Messia with the above teachings of Shikata in order to improve the user experience.
Glaser teaches further comprising analyzing a static background of the content capturing the event and recorded on the one or more devices to determine desirability of the content, wherein the analyzing comprises using background matching to further determine the geographical location and the orientation of the one or more devices (Para. [0160], “the person identification module may use Bluetooth beaconing, computing device signature detection, computing device location tracking, and/or other techniques to facilitate the identification of a person.” in Para. [0065], “A location property that identifies a focus, point, or region of the interaction may be associated with a gesture or interaction. The location property is preferably 3D or shelf location “receiving” the interaction. An environment location property on the other hand may identify the position in the environment where a user or agent performed the gesture or interaction.” in Para. [0066], “The data models may include a product location map which includes models a data associations between product identifiers and locations in the environment. The locations may be based on the 2D floor location in an environment, the 3D location in the environment, an image location (e.g., where located in image data collected from the environment), and/or any suitable characterization of location. The data models may include a user location data model that tracks location of users. The data models may additionally include a modeling confidence map, which relates the CV modeling confidence of different regions and/or events to locations in the environment. The data models may additionally include an interaction history map that relates the occurrences of interactions to locations in environment” in Para. [0069], “an active camera can collect image data while adjusting orientation and/or zoom to move the field of view across a product stocking region (e.g., a product shelf).” in Para. [0138]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Messia and Shikata with the above teachings of Glaser in order to improve the user experience.
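Examiner's note: neither reference discloses code for this step; the following is a hedged sketch of background matching against a hypothetical database of geo-referenced reference images, where the best-matching reference supplies an estimated location and heading.

```python
# Hedged sketch of background matching: compare a recording's static
# background against geo-referenced reference images (hypothetical data).
import cv2

def locate_by_background(query_gray, references):
    """references: list of (reference_gray_image, (lat, lon, heading_deg))."""
    orb = cv2.ORB_create(1500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, des_q = orb.detectAndCompute(query_gray, None)
    best_pose, best_count = None, -1
    for ref_img, pose in references:
        _, des_r = orb.detectAndCompute(ref_img, None)
        count = len(matcher.match(des_q, des_r))  # more matches = better fit
        if count > best_count:
            best_count, best_pose = count, pose
    return best_pose  # estimated (lat, lon, heading_deg) of the device
```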
As per claim 14, the limitations of claim 14 have been discussed in the rejection of claim 4 and are rejected under the same rationale.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUNGHYOUN PARK whose telephone number is (571)270-1333. The examiner can normally be reached M-Thur, 6:00 am - 4:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, THAI Q TRAN can be reached at (571)272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SUNGHYOUN PARK/Examiner, Art Unit 2484