Prosecution Insights
Last updated: April 19, 2026
Application No. 17/821,080

METHOD AND SYSTEM FOR REMOTE VIDEO MONITORING AND REMOTE VIDEO BROADCAST

Non-Final OA · §103, §112
Filed: Aug 19, 2022
Examiner: CARLSON, JEFFREY D
Art Unit: 3992
Tech Center: 3900
Assignee: Kyocera Corporation
OA Round: 3 (Non-Final)
Grant Probability: 27% (At Risk)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 9m
Grant Probability With Interview: 50%

Examiner Intelligence

Career Allow Rate: 27% (40 granted / 147 resolved; -32.8% vs TC avg) — grants only 27% of cases
Interview Lift: strong, +22.5% for resolved cases with interview
Typical Timeline: 3y 9m avg prosecution; 15 applications currently pending
Career History: 162 total applications across all art units
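The headline figures above are internally consistent, which can be checked with a quick calculation (a throwaway sketch; the variable names are ours, not from any tool):

```python
# Reproduce the dashboard's headline arithmetic from the raw counts shown above.
granted, resolved = 40, 147

# Career allow rate: 40 granted out of 147 resolved cases.
career_rate = granted / resolved
print(f"Career allow rate: {career_rate:.1%}")   # ~27.2%, shown as 27%

# Adding the +22.5% interview lift to the baseline lands near the
# 50% "With Interview" figure quoted at the top of the report.
with_interview = career_rate + 0.225
print(f"With interview: {with_interview:.1%}")   # ~49.7%, shown as 50%
```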

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 32.2% (-7.8% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§112: 34.6% (-5.4% vs TC avg)
Deltas are relative to Tech Center average estimates • Based on career data from 147 resolved cases
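Since each panel reports the examiner's rate alongside its delta from the Tech Center average, the implied TC average can be recovered by subtraction (a quick sketch; the dictionary layout is ours):

```python
# Recover the implied Tech Center average for each statute:
# TC average = examiner rate - delta (both in percentage points).
panels = {
    "§101": (8.9, -31.1),
    "§103": (32.2, -7.8),
    "§102": (17.3, -22.7),
    "§112": (34.6, -5.4),
}
for statute, (rate, delta) in panels.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate:.1f}% vs implied TC avg {tc_avg:.1f}%")
# All four panels imply the same TC average estimate of 40.0%.
```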

Office Action

§103 §112
REISSUE OFFICE ACTION

The present application is being examined under the pre-AIA first-to-invent provisions. This is a reissue office action for US Patent 9,131,257, which included original patent claims 1–56. Applicant requested amendment of the claims on 6/24/2025. Claims 57–75 are pending.

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 6/24/2025 has been entered.

Declaration and Reason for Reissue

This Reissue has been filed pursuant to the original patent being at least partly inoperative or invalid by reason of “claiming more or less than he had the right”, specifically:

“Applicant submits U.S. Patent No. 9,131,257 is deemed wholly or partly inoperative or invalid by reason of the patentee claiming less than he had a right to claim in the patent. In particular, original claims 1–56 of the patent recite registering operation for registering a plurality of subscribers or viewing devices. As a result, claims 1–56 are too narrow to protect the disclosed invention. New independent claim 1 and dependent claims 2–10 are provided to cover a “distribution system for an image distribution via a network, comprising a video capture device generating and transmitting image data of 360-degree video image; and a server receiving the transmitted image data and delivering the image data to a viewing device depending on a request from the viewing device.” New independent claim 11 is provided to cover a method of the same. All errors being corrected in this continuation application of a reissue application Serial No. 15/700,072 arose without deceptive intent” (10/20/2022 declaration p. 1).
35 USC § 112 Rejections

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 70 and 71 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention. Regarding claims 70 and 71, there is no antecedent basis for “assigned website”.

35 USC § 251 Rejections

Claims 57–64 and 67–75 are rejected under 35 U.S.C. 251 as being an impermissible recapture of broadened claimed subject matter surrendered in the application for the patent upon which the present reissue is based. See Greenliant Systems, Inc. et al v. Xicor LLC, 692 F.3d 1261, 103 USPQ2d 1951 (Fed. Cir. 2012); In re Shahram Mostafazadeh and Joseph O. Smith, 643 F.3d 1353, 98 USPQ2d 1639 (Fed. Cir. 2011); North American Container, Inc. v. Plastipak Packaging, Inc., 415 F.3d 1335, 75 USPQ2d 1545 (Fed. Cir. 2005); Pannu v. Storz Instruments Inc., 258 F.3d 1366, 59 USPQ2d 1597 (Fed. Cir. 2001); Hester Industries, Inc. v. Stein, Inc., 142 F.3d 1472, 46 USPQ2d 1641 (Fed. Cir. 1998); In re Clement, 131 F.3d 1464, 45 USPQ2d 1161 (Fed. Cir. 1997); Ball Corp. v. United States, 729 F.2d 1429, 1436, 221 USPQ 289, 295 (Fed. Cir. 1984).

The reissue application contains claims that are broader than the issued patent claims. The record of the application for the patent shows that the broadening aspect (in the reissue) relates to claimed subject matter that applicant previously surrendered during the prosecution of the application. Accordingly, the narrow scope of the claims in the patent was not an error within the meaning of 35 U.S.C. 251, and the broader scope of claim subject matter surrendered in the application for the patent cannot be recaptured by the filing of the present reissue application.

MPEP 1412.02(I) describes the following three-step test for determining whether impermissible recapture exists. See also the flowchart in MPEP 1412.02(VI).

(1) Determine whether, and in what respect, the reissue claims are broader in scope than the original patent claims;
(2) Determine whether the broader aspects of the reissue claims relate to subject matter surrendered in the original prosecution; and
(3) Determine whether the reissue claims were “materially narrowed” in other respects, so as to avoid the recapture rule.

Steps (1) and (2)

During the prosecution of 13/519,065 and subsequent to an art rejection, applicant added/argued the following claim limitations, each a Surrender Generating Limitation (SGL), in seeking allowance of the application:

11/14/2014 — prosecuted claims 55, 81, 98, 104
  SGL 1a: cellular communication link
  SGL 1b: cellular communication means
3/3/2015 — prosecuted claims 55, 81, 98, 104
  SGL 2a: viewers being temporarily allowed access to the video images over the network without being a subscriber
  SGL 2b: viewing devices being temporarily allowed access to the image data over the network without being a subscriber
11/14/2014 — prosecuted claims 55, 98
  SGL 3a: transmitting requested video images to the at least one of the respective one of the plurality of subscribers and a plurality of authorized viewers
  SGL 3b: transmitting means for transmitting requested video images to the at least one of the respective one of the plurality of subscribers and a plurality of authorized viewers
8/10/2015 — prosecuted claims 55, 81
  SGL 4a: wherein controlling access of the subscribers includes setting up a plurality of subscribers with a custom GUI application on a viewing device
  SGL 4b: wherein at least one viewing device accesses the image data corresponding to the at least one video capture device via a custom GUI application on the viewing device
8/10/2015 — prosecuted claim 98
  SGL 5: wherein the managing means receives control information from at least one of the respective one of the plurality of subscribers and a plurality of viewers for customized viewing by independently applying a set of user control instructions
8/10/2015 — prosecuted claim 104
  SGL 6: wherein the receiving means receives control signals so that the viewing devices can independently customize viewing orientation of the image data

Reissue claims 57–64 and 67–75 are missing SGL 4.

Step (3)

Because no single patented independent claim included all six of SGL 1–6, no reissue claim would require all six of SGL 1–6. Applicant has identified patented claim 28 (prosecuted as 81) to serve as the basis for independent reissue claims 57 and 67. Therefore, the reissue claims will be analyzed for potential recapture issues according to the SGLs present in prosecuted claim 81 – namely SGL 1, 2 and 4. Claims 57–69 lack SGL 4. Each independent claim eliminates the surrendered subject matter in its entirety. Therefore there is no “material narrowing” present. As per MPEP 1412.02(II)(C), “[i]f surrendered subject matter has been entirely eliminated from a claim in the reissue application, then a recapture rejection under 35 U.S.C. 251 is proper and must be made for that claim.” The flowchart likewise instructs the Examiner to “Make recapture rejection” when the final flowchart inquiry results in a “No”, which is the case here.

[Flowchart image omitted.]

Claim Rejections - 35 USC § 103

The following is a quotation of pre-AIA 35 U.S.C. 103(a), which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

Claims 57–58 and 63–71 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over US 8,675,071 (Slavin) in view of US 7,576,767 (Lee).

57. A distribution system for an image distribution via a network, comprising:

“Techniques are described for video monitoring and alarm verification technology” (Slavin 1:32–33).

“the system 200 further includes network 205 and the sensors 220, the module 222, and the camera 230 are configured to communicate sensor and image/video data to the one or more user devices 240, 250 over network 205 (e.g., the Internet, cellular network, etc.)” (Slavin 10:48–52).

a video capture device having a plurality of cameras for capturing image data to generate 360-degree video image, and

Slavin does not appear to teach a plurality of cameras for generating 360-degree video images. Lee teaches a “Panoramic Vision System” (at Title) including multiple cameras and processing to create a composite 360-degree image. Lee also describes such use in a surveillance system:

“The system uses image acquisition devices to capture a scene up to 360°…A display system is then used to display the resulting composite image” (Lee at Abstract).

“a plurality of image acquisition devices to capture image frame data from a scene and to generate image sensor inputs, said image frame data collectively covering up to a 360° field of view” (Lee 2:18–22).
“surveillance system covering up to 360° horizontal or 4π steradian field of view of the exterior and/or interior of a structure such as a building” (Lee 4:16–18).

“The pan and zoom functions are provided digitally by image processor 200” (Lee 12:54–55).

It would have been obvious to one of ordinary skill at the time of the invention to have combined multiple cameras with the system of Slavin in order to provide composite 360-degree video for the viewers. One of ordinary skill would have recognized that this would have provided more compelling video of the entire 360° scene and enabled zooming onto various subjects within the scene.

configured to transmit the captured image data to a server via a cellular communication link,

“the system 200 further includes network 205 and the sensors 220, the module 222, and the camera 230 are configured to communicate sensor and image/video data to the one or more user devices 240, 250 over network 205 (e.g., the Internet, cellular network, etc.)” (Slavin 10:48–52).

wherein the video capture device is configured to record the image data in case of detecting motion within a view range, and configured not to record the image data in case of detecting no motion within the view range;

“The camera 115 also may begin capturing video and initiate establishment of the connection with the mobile phone 130 based on the user 110 triggering a motion sensor (e.g., a Passive Infrared Motion detector) included in the camera 115. The camera 115 further may begin capturing and locally storing video based on the security system panel 120 detecting the door opening event or trigger of its own internal motion sensor and then initiate establishment of the connection with the mobile phone 130 based on the security system panel 120 detecting the alarm condition” (Slavin 3:13–22).

“a Passive Infra Red (PIR) motion sensor may be built into the camera 230 and used to trigger the camera 230 to capture one or more images when motion is detected.
The camera 230 also may include a microwave motion sensor built into the camera and used to trigger the camera 230 to capture one or more images when motion is detected. The camera 230 may have a “normally open” or “normally closed” digital input that can trigger capture of one or more images when external sensors (e.g., the sensors 220, PIR, door/window, etc.) detect motion or other events. In some implementations, the camera 230 receives a software command to capture an image when external devices detect motion. The camera 230 may receive the software command from the controller 212 or directly from one of the sensors 220” (Slavin 7:62–8:9).

Slavin’s discussion of a “normally open” digital input that can trigger capture of images when the motion sensor detects a motion event indicates that the capture of images is started in association with the event and its initiation and is stopped in association with the event and its conclusion. In this manner the system is able to record each detected event and does not simply record forever upon the first detected event.

the server configured to register an authorized user to a service providing online viewing of video images, and

“a link that the recipient can use to log into an event portal that provides access to sensor status, live video, and/or saved video files relevant to the event. The link also may open a portal that displays a shared image of the customer web/mobile portal screen, allowing the customer to control what data/video/image is shared with the recipient. In these implementations, the owner of the alarm system maintains control over the information from the monitoring system that is shared” (Slavin 2:28–36).

“For some customers, privacy is a significant issue for home use of video or still-image security cameras.
For that reason, those customers often do not want anyone else (including central station operators and police) to have access to images or video captured by home security system cameras without the explicit permission of the system owner. In some implementations, the techniques described throughout this disclosure allow the system owner to share video and other event data with third parties (e.g., pre-specified third parties) in the event of an emergency without granting access during non-emergencies, which may be defined by the system owner” (Slavin 1:60–2:4).

receive the image data transmitted from the video capture device, and to deliver the image data to a viewing device depending on a request from the viewing device to display the video image on the viewing device, and

“The monitoring application server 260 may store sensor and image/video data received from the monitoring system…The monitoring application server 260 also may make images/video captured by the camera 230 available to the one or more user devices 240, 250 over the network 205 (e.g., through a web portal)” (Slavin 8:49–58).

“the system 200 further includes network 205 and the sensors 220, the module 222, and the camera 230 are configured to communicate sensor and image/video data to the one or more user devices 240, 250 over network 205 (e.g., the Internet, cellular network, etc.)” (Slavin 10:48–52).

“The mobile phone 640 of the mom user includes a video display area 641 that displays the live video captured by the first camera 610 on the mobile phone 640. The mobile phone 640 also displays a list of virtual buttons 642 and 643 that the mom user can activate to initiate sharing of the live video to one or more other devices.
The mobile phone 640 further displays a start button 645 and a stop button 646 that the mom user can activate to control recording of the live video on electronic storage of the mobile phone 640” (Slavin 17:25–33).

the server is configured to temporarily allow access of at least one viewer to the image data over a network without being the authorized user.

“the one or more user devices 240, 250 initiate the sharing connection based on a user input command entered by a user after reviewing video or image data from the camera 230. In these implementations, the one or more user devices 240, 250 may provide the one or more third party devices 270, 280 with information needed to establish the sharing connection in response to the user input command. For instance, the information may include a link that opens a portal that displays a shared image of a portal screen (e.g., a customer web/mobile portal screen) that allows the user operating the user device to control what data/video/image is shared with the recipient operating the third party device. The information also may include credentials, such as a password, a machine token, etc. that the third party device can use to be authenticated to the user device. The credentials may be temporary or one-time access credentials to prevent the third party device from using the credentials to gain access to the monitoring system at a later date” (Slavin 12:22–39).

58. The distribution system according to claim 57, further comprising the viewing device configured to selectively display a view of the 360-degree video image from the image data.

As stated above for claim 57: It would have been obvious to one of ordinary skill at the time of the invention to have combined multiple cameras with the system of Slavin in order to provide composite 360-degree video for the viewers.
One of ordinary skill would have recognized that this would have provided more compelling video of the entire 360° scene and enabled zooming onto various subjects within the scene.

63. The distribution system according to claim 57, wherein the server is configured to deliver the image data to the viewing device of a subscriber of a service for delivering the image data.

The “owner” user of Slavin (who specifies the third parties who may receive shared video content) is taken to represent an authorized participant or “subscriber” of Slavin’s video event portal which monitors the owner user’s property.

64. The distribution system according to claim 57, further comprising: a plurality of video capture devices including the video capture device; and the viewing device configured to display a view of the 360-degree video image selected among plural pieces of image data captured and transmitted from the plurality of video capture devices.

Slavin teaches providing multiple cameras for multiple coverage areas: “As shown, a property 605 includes a first camera 610 located in a daughter's room, a second camera 620 located in a son's room, and a third camera 630 located in a dad's study” (Slavin 17:18–21).

As stated above for claim 57: It would have been obvious to one of ordinary skill at the time of the invention to have combined multiple cameras with the system of Slavin in order to provide composite 360-degree video for the viewers. One of ordinary skill would have recognized that this would have provided more compelling video of the entire 360° scene and enabled zooming onto various subjects within the scene.

65. The distribution system according to claim 57, further comprising the viewing device configured to display a view of the 360-degree video image by a custom graphical user interface (GUI) application.

“The user device 240 may load or install the native surveillance application 242 based on data received over a network or data received from local media.
The native surveillance application 242 runs on mobile devices platforms, such as iPhone, iPod touch, Blackberry, Google Android, Windows Mobile, etc. The native surveillance application 242 enables the user device 240 to receive and process image, video, and/or sensor data from the monitoring system” (Slavin 9:21–29).

66. The distribution system according to claim 65, wherein the viewing device is configured to download the custom GUI application via a social networking site.

Slavin states that the application may be installed based on data received over a network. This represents downloading and installing the application. Slavin does not explicitly state from where the application may be downloaded. However, the same structural configuration of the viewing device that would allow the application to be downloaded and installed from remote location A would likewise be present for downloading and installing from remote location B. The download location source does not impart any structural feature of the device. Nonetheless, given that the user device would be connecting to Slavin’s portal server to view the video content, it would have been obvious at the time the invention was made to have provided the application as a download from such a portal server. Slavin’s portal server represents a social networking site in that different users may chat with one another (e.g. Slavin at 15:24–27).

67. A method of distributing image data via a network, comprising:

“Techniques are described for video monitoring and alarm verification technology” (Slavin 1:32–33).

“the system 200 further includes network 205 and the sensors 220, the module 222, and the camera 230 are configured to communicate sensor and image/video data to the one or more user devices 240, 250 over network 205 (e.g., the Internet, cellular network, etc.)” (Slavin 10:48–52).
registering an authorized user to a service providing online viewing of video images;

“a link that the recipient can use to log into an event portal that provides access to sensor status, live video, and/or saved video files relevant to the event. The link also may open a portal that displays a shared image of the customer web/mobile portal screen, allowing the customer to control what data/video/image is shared with the recipient. In these implementations, the owner of the alarm system maintains control over the information from the monitoring system that is shared” (Slavin 2:28–36).

“For some customers, privacy is a significant issue for home use of video or still-image security cameras. For that reason, those customers often do not want anyone else (including central station operators and police) to have access to images or video captured by home security system cameras without the explicit permission of the system owner. In some implementations, the techniques described throughout this disclosure allow the system owner to share video and other event data with third parties (e.g., pre-specified third parties) in the event of an emergency without granting access during non-emergencies, which may be defined by the system owner” (Slavin 1:60–2:4).

capturing image data to generate 360-degree video image by a plurality of cameras of a video capture device,

Slavin does not appear to teach a plurality of cameras for generating 360-degree video images. Lee teaches a “Panoramic Vision System” (at Title) including multiple cameras and processing to create a composite 360-degree image. Lee also describes such use in a surveillance system:

“The system uses image acquisition devices to capture a scene up to 360°…A display system is then used to display the resulting composite image” (Lee at Abstract).
“a plurality of image acquisition devices to capture image frame data from a scene and to generate image sensor inputs, said image frame data collectively covering up to a 360° field of view” (Lee 2:18–22).

“surveillance system covering up to 360° horizontal or 4π steradian field of view of the exterior and/or interior of a structure such as a building” (Lee 4:16–18).

“The pan and zoom functions are provided digitally by image processor 200” (Lee 12:54–55).

It would have been obvious to one of ordinary skill at the time of the invention to have combined multiple cameras with the system of Slavin in order to provide composite 360-degree video for the viewers. One of ordinary skill would have recognized that this would have provided more compelling video of the entire 360° scene and enabled zooming onto various subjects within the scene.

wherein the video capture device is configured to record the image data in case of detecting motion within a view range, and configured not to record the image data in case of detecting no motion within the view range;

“The camera 115 also may begin capturing video and initiate establishment of the connection with the mobile phone 130 based on the user 110 triggering a motion sensor (e.g., a Passive Infrared Motion detector) included in the camera 115. The camera 115 further may begin capturing and locally storing video based on the security system panel 120 detecting the door opening event or trigger of its own internal motion sensor and then initiate establishment of the connection with the mobile phone 130 based on the security system panel 120 detecting the alarm condition” (Slavin 3:13–22).

“a Passive Infra Red (PIR) motion sensor may be built into the camera 230 and used to trigger the camera 230 to capture one or more images when motion is detected. The camera 230 also may include a microwave motion sensor built into the camera and used to trigger the camera 230 to capture one or more images when motion is detected.
The camera 230 may have a “normally open” or “normally closed” digital input that can trigger capture of one or more images when external sensors (e.g., the sensors 220, PIR, door/window, etc.) detect motion or other events. In some implementations, the camera 230 receives a software command to capture an image when external devices detect motion. The camera 230 may receive the software command from the controller 212 or directly from one of the sensors 220” (Slavin 7:62–8:9).

Slavin’s discussion of a “normally open” digital input that can trigger capture of images when the motion sensor detects a motion event indicates that the capture of images is started in association with the event and its initiation and is stopped in association with the event and its conclusion. In this manner the system is able to record each detected event and does not simply record forever upon the first detected event.

transmitting the captured image data from the video capture device to a server via a cellular communication link; receiving, at the server, the image data transmitted from the video capture device;

“the system 200 further includes network 205 and the sensors 220, the module 222, and the camera 230 are configured to communicate sensor and image/video data to the one or more user devices 240, 250 over network 205 (e.g., the Internet, cellular network, etc.)” (Slavin 10:48–52).

“The monitoring application server 260 may store sensor and image/video data received from the monitoring system…The monitoring application server 260 also may make images/video captured by the camera 230 available to the one or more user devices 240, 250 over the network 205 (e.g., through a web portal)” (Slavin 8:49–58).
delivering the image data from the server to a viewing device depending on a request from the viewing device to display the video image on the viewing device, and

“the system 200 further includes network 205 and the sensors 220, the module 222, and the camera 230 are configured to communicate sensor and image/video data to the one or more user devices 240, 250 over network 205 (e.g., the Internet, cellular network, etc.)” (Slavin 10:48–52).

“The mobile phone 640 of the mom user includes a video display area 641 that displays the live video captured by the first camera 610 on the mobile phone 640. The mobile phone 640 also displays a list of virtual buttons 642 and 643 that the mom user can activate to initiate sharing of the live video to one or more other devices. The mobile phone 640 further displays a start button 645 and a stop button 646 that the mom user can activate to control recording of the live video on electronic storage of the mobile phone 640” (Slavin 17:25–33).

at least one viewer is temporarily allowed access to the image data over a network without being the authorized user.

“the one or more user devices 240, 250 initiate the sharing connection based on a user input command entered by a user after reviewing video or image data from the camera 230. In these implementations, the one or more user devices 240, 250 may provide the one or more third party devices 270, 280 with information needed to establish the sharing connection in response to the user input command. For instance, the information may include a link that opens a portal that displays a shared image of a portal screen (e.g., a customer web/mobile portal screen) that allows the user operating the user device to control what data/video/image is shared with the recipient operating the third party device.
The information also may include credentials, such as a password, a machine token, etc. that the third party device can use to be authenticated to the user device. The credentials may be temporary or one-time access credentials to prevent the third party device from using the credentials to gain access to the monitoring system at a later date” (Slavin 12:22–39).

68. The distribution system according to claim 57, wherein the viewing device is configured to: display an object for selecting a recording option; and save the image data on the server to play back the saved image data at later time, in response to selecting the object.

“The mobile phone 130 further displays a start button 135 and a stop button 136 that the mom user can activate to control recording of the live video on electronic storage of the mobile phone 130” (Slavin 3:31–34).

“the user device may provide the third party device with a link that the third party device can use to log into an event portal on the monitoring application server 260 that provides access to sensor status, live video, and saved clips relevant to the event. The material included in the event portal may be controlled by the user (e.g., using the user device) and access may be granted for a limited period of time” (Slavin 12:40–51).

“The monitoring application server 260 may store sensor and image/video data received from the monitoring system…The monitoring application server 260 also may make images/video captured by the camera 230 available to the one or more user devices 240, 250 over the network 205 (e.g., through a web portal). In this regard, the one or more user devices 240, 250 may display images/video captured by the camera 230 from a remote location. This enables a user to perceive images/video of the user's property from a remote location and verify whether or not an alarm event is occurring at the user's property” (Slavin 8:49–63).
Given Slavin’s teachings for providing start and stop buttons on the phone for the user to request selective recording of video, the teachings that the server stores recorded video, and the teachings that the user controls the material included in the event portal, it would have been obvious at the time the invention was made to have provided a button on the user phone to request recording of video to be stored at the server. Doing so would allow recorded video to be available to other users and would eliminate the need for large storage capacity on the user’s phone.

69. The method according to claim 67, further comprising: displaying an object for selecting a recording option; and saving the image data on the server to play back the saved image data at later time, in response to selecting the object.

“The mobile phone 130 further displays a start button 135 and a stop button 136 that the mom user can activate to control recording of the live video on electronic storage of the mobile phone 130” (Slavin 3:31–34).

“the user device may provide the third party device with a link that the third party device can use to log into an event portal on the monitoring application server 260 that provides access to sensor status, live video, and saved clips relevant to the event. The material included in the event portal may be controlled by the user (e.g., using the user device) and access may be granted for a limited period of time” (Slavin 12:40–51).

“The monitoring application server 260 may store sensor and image/video data received from the monitoring system…The monitoring application server 260 also may make images/video captured by the camera 230 available to the one or more user devices 240, 250 over the network 205 (e.g., through a web portal). In this regard, the one or more user devices 240, 250 may display images/video captured by the camera 230 from a remote location.
This enables a user to perceive images/video of the user's property from a remote location and verify whether or not an alarm event is occurring at the user's property” (Slavin 8:49–63).

Given Slavin’s teachings for providing start and stop buttons on the phone for the user to request selective recording of video, the teachings that the server stores recorded video, and the teachings that the user controls the material included in the event portal, it would have been obvious at the time the invention was made to have provided a button on the user phone to request recording of video to be stored at the server. Doing so would allow recorded video to be available to other users and would eliminate the need for large storage capacity on the user’s phone.

70. The distribution system according to claim 57, wherein the server is configured to accommodate simultaneous logins to assigned website on the server and multiple viewing of images by a plurality of viewers.

“The multi-user chat session may allow the mom user, the dad user, and the neighbor user to discuss the video that they are all simultaneously perceiving and come to a decision on how to handle the potential alarm situation” (Slavin 15:26–31).

71. The distribution system according to claim 57, wherein the authorized user has an option to share a login privilege with a plurality of viewers, and the server is configured to accommodate simultaneous logins to assigned website on the server and multiple viewing of images by the plurality of viewers.

“the one or more user devices 240, 250 may provide the one or more third party devices 270, 280 with permission or credentials to access the monitoring system directly or the monitoring application server 260 for a limited time” (Slavin 12:40–44).
“The multi-user chat session may allow the mom user, the dad user, and the neighbor user to discuss the video that they are all simultaneously perceiving and come to a decision on how to handle the potential alarm situation” (Slavin 15:26–31).

Claims 59–62 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Slavin and Lee in view of US 2007/0024706 (Brannon).

59. The distribution system according to claim 58, further comprising a plurality of viewing devices including the viewing device, and configured to receive the image data from the server and to simultaneously display the 360-degree video image, wherein each of the viewing devices is configured to display a different view of the 360-degree video image.

Slavin teaches a plurality of viewing devices receiving the video content: “The monitoring application server 260 may store sensor and image/video data received from the monitoring system…The monitoring application server 260 also may make images/video captured by the camera 230 available to the one or more user devices 240, 250 over the network 205 (e.g., through a web portal)” (Slavin 8:49–58).

“a link that the recipient can use to log into an event portal that provides access to sensor status, live video, and/or saved video files relevant to the event. The link also may open a portal that displays a shared image of the customer web/mobile portal screen, allowing the customer to control what data/video/image is shared with the recipient. In these implementations, the owner of the alarm system maintains control over the information from the monitoring system that is shared” (Slavin 2:28–36).

While Slavin teaches sharing content related to the event with other viewers simultaneously, neither Slavin nor Lee appears to describe independent user control of the image data in order to independently view different portions of the video image.
Brannon also teaches a video distribution system where client viewers may each independently control aspects of the received video event scene for customized viewing/display: “Systems and methods for providing high-quality region of interest (HQ-ROI) viewing within an overall scene by enabling one or more HQ-ROIs to be viewed in a controllable fashion” (Brannon at ABSTRACT).

Brannon notes the disadvantage of conventional video distribution, where changing viewing parameters affects all viewers. Brannon further teaches improvements where each viewer may issue control commands to manipulate their own custom view of the event without affecting others:

“The disclosed systems and methods may be implemented in one embodiment to enable optimized simultaneous viewing of multiple video sources for each individual viewing client. This is in contrast to conventional video viewing systems . . . standard single-stream camera sources . . . are designed such that a configuration change for any of the above parameters affects all viewers” (Brannon ¶ 0033).

“a multi-stream video source may be optionally configured with the ability to spatially move the reference coordinates of an ROI stream within the scene's overall image, e.g., via some set of suitable control commands such as those implemented for Pan-Tilt-Zoom (PTZ) cameras. The ability to perform the ROI control logic may be implemented, for example, at a viewing application” (Brannon ¶ 0030).

“video source 102 may be configured to accept commands (e.g., ‘Pan and Tilt’ commands) that allow the client viewing application 122 to move the spatial coordinates of the 320H×180V HQ-ROI view/stream around within the scene” (Brannon ¶ 0093).

“multi-stream HQ-ROI viewing capability may be implemented . . . to deliver two or more video streams to one or more viewing clients via a network medium.
For example, as previously mentioned, a video source component and video access component may be separate components or integrated together as a single device, e.g., camera and stream server components may be one device.” (Brannon ¶ 0087). “Video access component . . . to communicate these multiple digital video streams (not shown separately in FIG. 3) across computer network medium 112 to multiple viewing clients 120a through 120n” (Brannon ¶ 0060).

It would have been obvious to one of ordinary skill to have provided the viewers of Slavin and Lee the ability to independently send control instructions for manipulating the video image so that each user can experience a customized viewing (e.g., panning, tilting, and/or zooming) of the event. Doing so would have allowed the viewers to customize their viewing of such events.

60. The distribution system according to claim 59, wherein each of the viewing devices is configured to individually control a display of a common scene depending on user's operation without affecting other viewing devices. See claim 59.

61. The distribution system according to claim 58, further comprising a plurality of viewing devices including the viewing device, wherein the viewing device is configured to perform a display control depending on user's pan/tilt/zoom operation without affecting other viewing devices. See claim 59.

62. The distribution system according to claim 57, wherein the video capture device comprises two back to back cameras each having about 180-degree angle of view.

Lee’s 360-degree panoramic vision system does not appear to explicitly describe “two back to back cameras each having about 180-degree angle of view”. However, given the teachings by Brannon to use multiple cameras that collectively cover a 360° field of view (e.g., 1:24–25, 2:18–21), it would have been obvious to one of ordinary skill at the time of the invention to use any number of cameras with fields of view that collectively cover 360°, including 2 × 180°.
Claims 72–73 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Slavin and Lee in view of US 2002/0167590 (Naidoo).

72. The distribution system according to claim 57, wherein the video capture device is configured to be programmed to stop recording of the image data after a certain period of time.

Neither Slavin nor Lee appears to teach that the device is configured to be programmed to stop recording after a certain period of time. Naidoo teaches a security system with sensors to trigger video recording of an event (at Abstract, ¶ 0033–0034). Naidoo teaches that a user is able to adjust (i.e., “program”) the system to stop recording after certain time periods: “the customer or remote user 16 may able to adjust said predetermined basis including without limitation adjusting the recording times, duration, and total length of recordings” (Naidoo ¶ 0058).

It would have been obvious at the time the invention was made to have configured the system of Slavin and Lee with programmable/adjustable recording durations. Doing so would have enabled the user to customize the recorded content. It would also have enabled the user to choose a time period that strikes a balance between preserving additional footage and conserving media storage resources.

73. The method according to claim 67, wherein the video capture device is programmed to stop recording of the image data after a certain period of time. See claim 72.

Claims 74–75 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Slavin and Lee in view of US 2010/0195810 (Mota).

74. The distribution system according to claim 57, wherein the video capture device is configured to record the image data in case of detecting sound, and configured not to record the image data in case of detecting no sound.

Slavin teaches video capture triggered on detected motion events, but not on detected sound events.
Mota teaches a security system with a video camera which is triggered on detected motion and/or sound events: “a security system having a camera, a sensor generating a signal in response to a triggering event, and a management module. The triggering event is one of…sound detection, motion detection…The management module is adapted to send data to be received by a remote communication device upon generation by the sensor of the signal generated in response to the triggering event” (Mota ¶ 0014).

It would have been obvious to one of ordinary skill at the time of the invention to have included a sound detection sensor in the system of Slavin and Lee in order to trigger video in response to sounds. Doing so would allow the users of Slavin to investigate the monitored area in the event of a suspicious sound. As stated above, Slavin’s capture of images is started in association with the detected event and its initiation and is stopped in association with the event and its conclusion. Triggering video capture based on a detected sound event in the combination would likewise start and stop video capture in association with individual sound events.

75. The method according to claim 67, wherein the video capture device records the image data in case of detecting sound, and does not record the image data in case of detecting no sound. See claim 74.

Response To Arguments

§ 103 – Prior Art

Applicant states: “As presented above, amended independent claims 57 and 67 recite, among other features, "wherein the video capture device is configured to record the image data in case of detecting motion within a view range, and configured not to record the image data in case of detecting no motion within the view range." Applicant respectfully contends that none of the cited references describe the additional features recited in amended claims 57 and 67” (6/24/2025 Remarks p. 9).

Examiner has addressed this new claim language in the rejections above.
Notification of Proceedings and Material Information

Applicant is reminded of the continuing obligation under 37 CFR 1.178(b) to timely apprise the Office of any prior or concurrent proceeding in which this patent is or was involved. These proceedings would include any trial before the Patent Trial and Appeal Board, interferences, reissues, reexaminations, supplemental examinations, and litigation.

Applicant is further reminded of the continuing obligation under 37 CFR 1.56 to timely apprise the Office of any information which is material to patentability of the claims under consideration in this reissue application. These obligations rest with each individual associated with the filing and prosecution of this application for reissue. See also MPEP §§ 1404, 1442.01 and 1442.04.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEFFREY D CARLSON whose telephone number is (571) 272-6716. The examiner can normally be reached Mon-Fri 7:30 am to 5:00 pm, off 1st Fri.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Fuelling, can be reached on (571) 270-1367. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JEFFREY D CARLSON/
Primary Examiner, Art Unit 3992

Conferees:
/C. Michelle Tarae/
Reexamination Specialist, Art Unit 3992
/M.F/
Supervisory Patent Examiner, Art Unit 3992

1. In Hester, supra, the Federal Circuit held that the surrender that forms the basis for impermissible recapture “can occur through arguments alone”. 142 F.3d at 1482, 46 USPQ2d at 1649.
2. Prosecuted claims issued as patented claims as follows: prosecuted claim 55 issued as patented claim 1; prosecuted claim 81 issued as patented claim 28; prosecuted claim 98 issued as patented claim 47; prosecuted claim 104 issued as patented claim 52.
3. See 12/2/2024 Remarks p. 7–8.

Prosecution Timeline

Aug 19, 2022: Application Filed
Aug 19, 2022: Response after Non-Final Action
Jun 24, 2024: Non-Final Rejection — §103, §112
Dec 02, 2024: Response Filed
Jan 13, 2025: Final Rejection — §103, §112
Apr 25, 2025: Response after Non-Final Action
Jun 24, 2025: Request for Continued Examination
Jun 25, 2025: Response after Non-Final Action
Nov 17, 2025: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent RE50866: FRAUD PREVENTION TRADING AND PAYMENT SYSTEM FOR BUSINESS AND CONSUMER TRANSACTIONS (granted Apr 14, 2026; 2y 5m to grant)
Patent RE50843: Athletic Training Optimization (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591853: METHOD FOR COLLABORATIVE DESIGN ON CONTEXT BOUNDARIES IN MODEL-BASED TOOLS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12564794: AUTONOMOUS MOBILE BODY AND INFORMATION PROCESSING METHOD (granted Mar 03, 2026; 2y 5m to grant)
Patent RE50760: DRIVER MONITORING SYSTEM AND DRIVER MONITORING METHOD (granted Jan 27, 2026; 2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 27%
With Interview (+22.5%): 50%
Median Time to Grant: 3y 9m
PTA Risk: High
Based on 147 resolved cases by this examiner. Grant probability derived from career allow rate.
