kytos-ng / maintenance
This project forked from kytos/maintenance
Kytos Maintenance Window NApp
Home Page: https://kytos-ng.github.io/api/maintenance.html
The unit tests are failing due to the following dependency conflict reported by pip:

    The conflict is caused by:
        The user requested wrapt==1.10.11
        astroid 2.3.3 depends on wrapt==1.11.*
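The two pins are mutually exclusive, so pip's resolver has no version to pick. A minimal pure-Python sketch (illustrative only, not pip's actual resolver) of the "==X.Y.*" wildcard check shows why:

```python
# Minimal sketch (illustrative, not pip's resolver): why the user pin
# wrapt==1.10.11 and astroid's wrapt==1.11.* cannot both be satisfied.
def satisfies_wildcard(version: str, spec: str) -> bool:
    """Check a version against a pip-style '==X.Y.*' wildcard spec."""
    assert spec.endswith(".*"), "only wildcard specs handled here"
    return version.startswith(spec[:-1])  # "1.11.*" -> prefix "1.11."

print(satisfies_wildcard("1.10.11", "1.11.*"))  # False: conflicts with astroid's pin
print(satisfies_wildcard("1.11.2", "1.11.*"))   # True: a version astroid accepts
```

Relaxing or bumping the wrapt pin in the requirements to a 1.11-series version would let the resolver succeed.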
From kytos console:
/usr/local/lib/python3.7/dist-packages/apscheduler/util.py:95: PytzUsageWarning: The zone attribute is specific to pytz's interface; please migrate to a new time zone provider. For more details on how to do so, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html
if obj.zone == 'local':
/usr/local/lib/python3.7/dist-packages/apscheduler/util.py:95:
def astimezone(obj):
    """
    Interprets an object as a timezone.
    :rtype: tzinfo
    """
    if isinstance(obj, six.string_types):
        return timezone(obj)
    if isinstance(obj, tzinfo):
        ...
https://github.com/kytos-ng/maintenance/blob/master/models.py#L171
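Per the migration guide linked in the warning, one way to silence this on Python 3.9+ is to hand APScheduler stdlib zoneinfo timezones instead of pytz ones. A minimal sketch (assuming the host has a tz database available):

```python
# Sketch of the pytz -> zoneinfo migration the warning asks for (Python 3.9+).
from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("UTC")        # zoneinfo objects have no pytz-specific `.zone`
print(tz.key)               # the equivalent attribute is `.key` -> "UTC"

aware = datetime(2022, 3, 11, 12, 5, tzinfo=tz)   # attach directly, no localize()
print(aware.isoformat())    # 2022-03-11T12:05:00+00:00
```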
In order to indicate that maintenance is the source of interruptions, maintenance should use the new interruption framework specified in the blueprint added by kytos-ng/kytos#356. This will make it so that services can potentially ignore a maintenance interruption.
This call should return 200 as it does not create a new entity.
The documentation should be modified to provide the proper information according to the actions.
Original issue opened by @ArturoQuintana at kytos#35.
@hdiogenes @ajoaoff @rmotitsuki
The API call PATCH /maintenance/{mw_id}/end was tested with the idea of executing an early end of a running maintenance task. Unfortunately, the call does not succeed.
The test code used is as follows:
def test_040_end_mw_on_switch(self):
    # Setup maintenance window data
    start = datetime.now() + timedelta(seconds=60)
    end = start + timedelta(hours=1)
    payload = {
        "description": "mw for test 040",
        "start": start.strftime(TIME_FMT),
        "end": end.strftime(TIME_FMT),
        "items": [
            "00:00:00:00:00:00:00:02"
        ]
    }
    # Request maintenance schema creation, and extract the mw_id
    json_data = self.request_maintenance(payload, False)
    mw_id = json_data["mw_id"]
    time.sleep(10)
    api_url = KYTOS_API + '/maintenance/' + mw_id + '/end'
    request = requests.patch(api_url)
    assert request.status_code == 201
Original issue opened by @italovalcy at kytos#18.
Description: when a switch X needs to be replaced (e.g., due to hardware failure), the orchestration tool needs to provide means to move the services using X to a different switch in the topology.
Expected deliverables:
The following situations are possible:
a) same hardware and same datapath id: no action from Kytos
---> just create a MW, disconnect the cables from the old switch and connect them to the new one (as part of the MW, the operator needs to schedule a maintenance which will then move the services out of the switch; actually, we need to discuss whether this will be part of the Maintenance NApp or a different NApp)
---> the maintenance and mef_eline NApps already do this
---> it'd be good to know if mef_eline did the Right Thing, so the MW could be confirmed
---> how to know the impact? mef_eline returns how many circuits will be affected; one of those won't have an alternate path
---> check again at the beginning of the maintenance window
b) same hardware and different datapath id: the network operator adds a new datapath-id to the switch (or changes the current one) and activates it (a MW needs to be scheduled beforehand to move the circuits out of this switch). We need to save all information about the equipment to keep its history. Every operation should generate a log entry (e.g., switch changed serial number, switch changed datapath-id, etc.)
c) different hardware and different datapath id (this case will be left for further discussion):
c.1) The network operator submits a "map of ports" (a list mapping the old ports to the new ones, with possible overlapping. Example: eth1 > eth2/1; eth2 > eth2/2; eth3 > eth2/1; eth4 > eth2/4; ...)
c.2) The orchestration tool checks for inconsistencies in the port mapping (e.g., services conflicting on the same VLAN id) and tries to solve them automatically (e.g., using a different VLAN id along the path). If some conflicts cannot be fixed automatically, return to the operator and ask for a new port mapping
c.3) change the services according to the port mapping
c.4) deactivate the old switch on the topology (if it is a replacement, wouldn't the "old" become the "new"? Yes, good idea! Let's keep only one and update the history of this node)
d) different hardware and same datapath id: very similar to the previous case (very similar to which one? If both switches are online at the same time, it will be a problem, so maybe it is not similar and is actually the most dangerous case. Let's split this into two problems: 1) assuming Kytos can handle two switches with the same datapath-id, this case is very similar to case c; 2) if Kytos doesn't support two switches with the same datapath-id, we just need to disable the first one, activate the new switch and apply the procedure from c) (question: what happens if two switches with the same DPID connect to Kytos? Good question, we need to check)
Currently, the status attribute, which is meant for the backend to control the state of a maintenance window, is being accepted on input. As a result, users can create inconsistencies, for instance a maintenance window that hasn't started yet but is already finished:
{
  "id": "some_mw",
  "description": "some_mw",
  "start": "2022-03-11T12:05:01-0300",
  "end": "2022-03-11T14:38:01-0300",
  "items": [
    "{{dpid}}"
  ],
  "status": 2
}
Aris was working on the FE part, and although the backend also returns a status attribute, it's not documented in the API, which led to confusion when trying to consume the endpoint and filter results.
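One possible server-side guard (a sketch; validate_payload is a hypothetical helper, not the NApp's API) is to reject backend-managed fields before accepting the window:

```python
# Sketch: reject backend-controlled attributes in user payloads, so a client
# can't pre-set "status" and create an inconsistent window.
BACKEND_ONLY_FIELDS = {"status"}

def validate_payload(payload: dict) -> dict:
    """Raise ValueError (mapped to HTTP 400) if read-only fields are present."""
    forbidden = sorted(BACKEND_ONLY_FIELDS & payload.keys())
    if forbidden:
        raise ValueError(f"read-only fields in request: {forbidden}")
    return payload
```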
- remove button to delete a maintenance window consuming this endpoint in the detailed k-info-panel that will be implemented on issue #37
- mef_eline UI has implemented similar components, so it turns out it was easier to get a sample to illustrate the layout of the components as well

chore: Replace TestCase with pytest test suite or test case
❯ rg TestCase -g "*.py"
tests/unit/managers/test_deployer.py
3:from unittest import TestCase
19:class TestDeployer(TestCase):
tests/unit/managers/test_scheduler.py
3:from unittest import TestCase
19:class TestScheduler(TestCase):
tests/unit/test_models.py
3:from unittest import TestCase
13:class TestMW(TestCase):
tests/unit/test_controller.py
3:from unittest import TestCase
12:class TestMaintenanceController(TestCase):
@Ktmi this is low priority, but if you could add to your radar just so it can be addressed in a next opportunity, that'd be great.
Original issue opened by @ArturoQuintana at kytos#44.
Example:
payload1 = {
    "start": start.strftime(TIME_FMT),
    "end": new_time.strftime(TIME_FMT)
}
mw_api_url = KYTOS_API + '/maintenance/' + mw_id
request = requests.patch(mw_api_url, data=json.dumps(payload1),
                         headers={'Content-type': 'application/json'})
Reflected on end-to-end test:
tests/test_e2e_15_maintenance.py::TestE2EMaintenance::test_036_patch_mw_on_switch_should_fail_wrong_payload_items_empty
Example:
payload1 = {
    "start": start.strftime(TIME_FMT),
    "end": new_time.strftime(TIME_FMT)
}
mw_api_url = KYTOS_API + '/maintenance/' + mw_id
request = requests.patch(mw_api_url, data=json.dumps(payload1),
                         headers={'Content-type': 'application/json'})
Reflected on end-to-end test:
tests/test_e2e_15_maintenance.py::TestE2EMaintenance::test_038_patch_mw_on_switch_should_fail_wrong_payload_no_items_field
Example:
payload1 = {
    "description"
}
mw_api_url = KYTOS_API + '/maintenance/' + mw_id
request = requests.patch(mw_api_url, data=json.dumps(payload1),
                         headers={'Content-type': 'application/json'})
Reflected on end-to-end test: tests/test_e2e_15_maintenance.py::TestE2EMaintenance::test_039_patch_mw_on_switch_should_fail_wrong_payload
Hi,
The MW attribute for links under maintenance is being sent incorrectly from the UI to the backend: instead of link it should be links in
maintenance/ui/k-toolbar/main.kytos
Line 123 in 99e69d5
As a consequence, the MW is not created from the UI when only links are provided (see image below).
Refine existing blueprint, make sure existing use cases are covered.
This call should return 201 as it modifies the maintenance window information.
The documentation should be modified to provide the proper information according to the actions.
- k-toolbar component with inputs and a create button to consume the POST /maintenance endpoint
- items should provide a way to select ids from existing switches, interfaces and links
- mef_eline UI has implemented similar components, so it turns out it was easier to get a sample to illustrate the layout of the components as well

Currently, when a maintenance window is requested, it is not possible to know ahead of time what services may be affected by it. This functionality is necessary for closing #2 and #3, as they both require visibility into the affected services to identify conflicts or unexpected changes. To implement this feature, the framework for previewing from EP037 should be used.
- k-info-panel to show detailed information about a maintenance window from the GET /maintenance endpoint
- allow the start, end, description and items attributes to be edited, and create a save maintenance window button
- back to list button to go back to the maintenance list that will be implemented on issue #36
- mef_eline UI has implemented similar components, so it turns out it was easier to get a sample to illustrate the layout of the components as well

- extend maintenance window button consuming this endpoint and a k-input component in the detailed k-info-panel that will be implemented on issue #37
- mef_eline UI has implemented similar components, so it turns out it was easier to get a sample to illustrate the layout of the components as well

Original issue opened by @italovalcy at kytos#37.
When we create a new maintenance window, the user input parameters are not validated: the only check is to make sure the item under MW is a link, interface, or switch (instance_of). It would be nice if we also checked whether the requested item is an actual existing switch, link, or interface in the topology.
The use case for that is: when the administrator creates a MW with a wrong switch id, for instance, it has no means to know that it was wrong (look at what was happening with the end-to-end tests: https://github.com/amlight/kytos-end-to-end-tests/pull/40/files#diff-cf70db90507ca6705079eaa8362c1b100d07ec22e3b188f8d69c56ce7a4793eaR75)
There may be cases where the administrator creates a MW for future equipment, so it would be nice to allow skipping the validation with an option like force (which could also ignore the validations for start/end time, if you will).
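A sketch of what such a check could look like (check_items_exist and its signature are illustrative, not the NApp's API), including the suggested force escape hatch:

```python
# Sketch: verify requested MW items against ids known to the topology,
# with `force` skipping the check for future equipment.
def check_items_exist(items, topology_ids, force=False):
    """Return the ids not present in the topology; empty means acceptable."""
    if force:
        return []
    return [item for item in items if item not in topology_ids]

known = {"00:00:00:00:00:00:00:01", "00:00:00:00:00:00:00:02"}
print(check_items_exist(["00:00:00:00:00:00:00:99"], known))              # unknown id
print(check_items_exist(["00:00:00:00:00:00:00:99"], known, force=True))  # []
```

A non-empty result would map to a 400 response naming the unknown ids.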
We don't have integrity validation errors in place yet, so if a user sends the same mw id again on POST /maintenance/, it crashes the server:
{
  "id": "some_mw2",
  "description": "some_mw2",
  "start": "2022-03-06T14:37:01-0300",
  "end": "2022-03-06T14:38:01-0300",
  "items": [
    "{{dpid}}"
  ]
}
{
  "code": 500,
  "description": "The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.",
  "name": "Internal Server Error"
}
Traceback (most recent call last):
  File "/home/viniarck/repos/kytos/.direnv/python-3.9.4/lib/python3.9/site-packages/flask/app.py", line 2447, in wsgi_app
    response = self.full_dispatch_request()
  File "/home/viniarck/repos/kytos/.direnv/python-3.9.4/lib/python3.9/site-packages/flask/app.py", line 1952, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/home/viniarck/repos/kytos/.direnv/python-3.9.4/lib/python3.9/site-packages/flask_cors/extension.py", line 161, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
  File "/home/viniarck/repos/kytos/.direnv/python-3.9.4/lib/python3.9/site-packages/flask/app.py", line 1821, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/home/viniarck/repos/kytos/.direnv/python-3.9.4/lib/python3.9/site-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/home/viniarck/repos/kytos/.direnv/python-3.9.4/lib/python3.9/site-packages/flask/app.py", line 1950, in full_dispatch_request
    rv = self.dispatch_request()
  File "/home/viniarck/repos/kytos/.direnv/python-3.9.4/lib/python3.9/site-packages/flask/app.py", line 1936, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/home/viniarck/repos/napps/kytos/maintenance/main.py", line 79, in create_mw
    self.scheduler.add(maintenance)
  File "/home/viniarck/repos/napps/../napps/kytos/maintenance/models.py", line 214, in add
    self.scheduler.add_job(maintenance.start_mw, 'date',
  File "/home/viniarck/repos/kytos/.direnv/python-3.9.4/lib/python3.9/site-packages/apscheduler/schedulers/base.py", line 448, in add_job
    self._real_add_job(job, jobstore, replace_existing)
  File "/home/viniarck/repos/kytos/.direnv/python-3.9.4/lib/python3.9/site-packages/apscheduler/schedulers/base.py", line 872, in _real_add_job
    store.add_job(job)
  File "/home/viniarck/repos/kytos/.direnv/python-3.9.4/lib/python3.9/site-packages/apscheduler/jobstores/memory.py", line 41, in add_job
    raise ConflictingIdError(job.id)
apscheduler.jobstores.base.ConflictingIdError: 'Job identifier (some_mw2-start) conflicts with an existing job'
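The crash could be avoided by detecting the duplicate before handing the job to the scheduler. A sketch (names are hypothetical, not the NApp's code):

```python
# Sketch: detect a duplicate mw_id up front and surface a client error
# (e.g. HTTP 409) instead of letting ConflictingIdError become a 500.
class DuplicateWindowError(Exception):
    """Raised when a window with the same id is already scheduled."""

def add_window(windows: dict, mw_id: str, window: dict) -> None:
    if mw_id in windows:  # same condition the jobstore enforces via job ids
        raise DuplicateWindowError(f"maintenance window {mw_id!r} already exists")
    windows[mw_id] = window
```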
This is for creating a PR to bump this NApp version to 2023.1.0; if you need a reference, check out any of the other PRs that are bumping versions, e.g. kytos-ng/sdntrace_cp#98.
@Ktmi, can you help out with this minor one here? The PR can be opened, but it'll only get merged once exploratory tests finish on July 3rd. What we have on master so far is only the API endpoints, which are now run by starlette/uvicorn instead of flask/werkzeug.
Also, another decision we'll need to make this week: whether epic_maintenance_v1 will still be on time by Jun 16; otherwise it'll exceptionally need to be released as a patch 2023.1.1 this time.
Hi,
When adding a MW with a duplicated ID, the Kytos server returns a 500 error (Error 500 when creating a MW with the same id as an existing one):
Feb 6 20:53:31 130c32ca043b kytos.core.controller:ERROR app:1449: Exception on /api/kytos/maintenance/v1 [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2073, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1519, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask_cors/extension.py", line 165, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1517, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1503, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "//var/lib/kytos/napps/kytos/maintenance/main.py", line 108, in create_mw
    self.scheduler.add(maintenance, force=force)
  File "//var/lib/kytos/napps/../napps/kytos/maintenance/models.py", line 320, in add
    self.db_controller.insert_window(window)
  File "/usr/local/lib/python3.9/dist-packages/tenacity/__init__.py", line 324, in wrapped_f
    return self(f, *args, **kw)
  File "/usr/local/lib/python3.9/dist-packages/tenacity/__init__.py", line 404, in __call__
    do = self.iter(retry_state=retry_state)
  File "/usr/local/lib/python3.9/dist-packages/tenacity/__init__.py", line 349, in iter
    return fut.result()
  File "/usr/lib/python3.9/concurrent/futures/_base.py", line 433, in result
    return self.__get_result()
  File "/usr/lib/python3.9/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.9/dist-packages/tenacity/__init__.py", line 407, in __call__
    result = fn(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/kytos/core/retry.py", line 25, in decorated
    return func(*args, **kwargs)
  File "//var/lib/kytos/napps/../napps/kytos/maintenance/controllers/__init__.py", line 64, in insert_window
    self.windows.insert_one({
  File "/usr/local/lib/python3.9/dist-packages/elasticapm/instrumentation/packages/base.py", line 210, in call_if_sampling
    return self.call(module, method, wrapped, instance, args, kwargs)
  File "/usr/local/lib/python3.9/dist-packages/elasticapm/instrumentation/packages/pymongo.py", line 94, in call
    return wrapped(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/pymongo/collection.py", line 606, in insert_one
    self._insert_one(
  File "/usr/local/lib/python3.9/dist-packages/pymongo/collection.py", line 547, in _insert_one
    self.__database.client._retryable_write(acknowledged, _insert_command, session)
  File "/usr/local/lib/python3.9/dist-packages/pymongo/mongo_client.py", line 1399, in _retryable_write
    return self._retry_with_session(retryable, func, s, None)
  File "/usr/local/lib/python3.9/dist-packages/pymongo/mongo_client.py", line 1286, in _retry_with_session
    return self._retry_internal(retryable, func, session, bulk)
  File "/usr/local/lib/python3.9/dist-packages/pymongo/mongo_client.py", line 1320, in _retry_internal
    return func(session, sock_info, retryable)
  File "/usr/local/lib/python3.9/dist-packages/pymongo/collection.py", line 545, in _insert_command
    _check_write_command_response(result)
  File "/usr/local/lib/python3.9/dist-packages/pymongo/helpers.py", line 216, in _check_write_command_response
    _raise_last_write_error(write_errors)
  File "/usr/local/lib/python3.9/dist-packages/pymongo/helpers.py", line 188, in _raise_last_write_error
    raise DuplicateKeyError(error.get("errmsg"), 11000, error)
pymongo.errors.DuplicateKeyError: E11000 duplicate key error collection: napps.maintenance.windows index: id_1 dup key: { id: "28bcd26b963040eabf0007e27deec683" }, full error: {'index': 0, 'code': 11000, 'keyPattern': {'id': 1}, 'keyValue': {'id': '28bcd26b963040eabf0007e27deec683'}, 'errmsg': 'E11000 duplicate key error collection: napps.maintenance.windows index: id_1 dup key: { id: "28bcd26b963040eabf0007e27deec683" }'}
Should we accept ID as an input attribute? It looks like ID should be an internal, auto-generated field (as it currently is).
If we do accept ID as an input parameter, it would be nice to have some validation and return a 4xx error instead of a 500. But once again, I don't think we should allow ID to be received from the user.
It's not allowing a MW to be created on an existing interface.
linear,3
❯ curl -s http://0.0.0.0:8181/api/kytos/topology/v3/interfaces/ | jq '.interfaces["00:00:00:00:00:00:00:01:1"]'
{
  "id": "00:00:00:00:00:00:00:01:1",
  "name": "s1-eth1",
  "port_number": 1,
  "mac": "a2:be:76:40:16:7e",
  "switch": "00:00:00:00:00:00:00:01",
  "type": "interface",
  "nni": false,
  "uni": true,
  "speed": 1250000000.0,
  "metadata": {},
  "lldp": true,
  "active": true,
  "enabled": true,
  "status": "UP",
  "status_reason": [],
  "link": ""
}
❯ curl -H 'Content-Type: application/json' -X POST http://127.0.0.1:8181/api/kytos/maintenance/v1/ -d '{ "id": "some_mw", "description": "some_mw", "start": "2023-12-12T17:03:00-0300", "end": "2023-12-12T17:10:00-0300", "interfaces": [ "00:00:00:00:00:00:00:01:1" ] }'
{"description":"Window contains non-existant items: {'switches': [], 'interfaces': [], 'links': ['00:00:00:00:00:00:00:01:1']}","code":400}
Notice it returned 400 and reported that a link wasn't found, so apparently it's incorrectly trying to look up the interface as a link. Also, a trivial detail, but there's a typo: "existant" instead of "existent".
cc'ing @Ktmi and @italovalcy for their information; for now I'll leave this without any priority label (Italo, feel free to set one if you'll need this in the short term). The good thing is that link MWs are being created, so they can be used to target the underlying interface. In the future, when this gets prioritized and fixed, let's also cover it with an e2e test.
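For reference, one way the item kinds could be told apart is by the shape of their ids (a dpid has 8 colon-separated octets; an interface id appends a port number). This classifier is only an illustrative sketch, not the NApp's actual lookup logic:

```python
# Sketch: distinguish switch/interface ids by shape; anything else is
# treated as a link id here (kytos link ids are hashes without colons).
def classify(item_id: str) -> str:
    parts = item_id.split(":")
    if len(parts) == 8:
        return "switch"
    if len(parts) == 9 and parts[-1].isdigit():
        return "interface"
    return "link"

print(classify("00:00:00:00:00:00:00:01"))    # switch
print(classify("00:00:00:00:00:00:00:01:1"))  # interface
```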
For more information, check out kytos-ng/kytos#337. Where applicable, the following libs should also be updated ipython, requests/certifi, pytest 7.2+. Ideally, issue kytos-ng/kytos#338 should be handled to simplify our team upgrade workflow too.
I was reviewing the spec when fixing issue #31 and just realized that this endpoint POST /maintenance/report has been documented but not implemented yet; it was meant to simulate a maintenance and return a report with the affected services. So I'm capturing this in this issue to be revisited later on.
- finish button to complete a maintenance window consuming this endpoint in the detailed k-info-panel that will be implemented on issue #37
- mef_eline UI has implemented similar components, so it turns out it was easier to get a sample to illustrate the layout of the components as well

Original issue opened by @italovalcy at kytos#19.
Description: when the network operator needs to decommission a Link, the orchestration tool should follow a sequence of steps to ensure no service will be affected
Expected deliverables:
The following procedure should be followed to decommission a link:
a) move EVCs from the source link (the one being decommissioned) to a new destination link (what about EVCs with a static path? What about non-EVCs?) (which services are we talking about: EVCs or flow entries? Only EVCs)
- be careful with possible conflicts on the VLAN id used in source and destination link.
==> We need to discuss the role of MEF_ELINE Napp: is it supposed to do all the provisioning and maintenance/resilience of a link? Maybe we can have something like: mef_eline provisioning and mef_eline resilience (and mef_eline be only a meta package)
b) change the backup path of existing EVCs not to include this link under decommissioning
c) disable services which cannot be migrated
d) disable the interfaces
This call should return 200 as it does not create a new entity.
The documentation should be modified to provide the proper information according to the actions.
Python is getting updated (related); some dependencies of this NApp need to be updated as well.
Currently, this NApp isn't using versioned routes in its REST endpoints. Most of the NApps we maintain, such as mef_eline, flow_manager, topology, storehouse, of_lldp and so on, use a version like v[1-9]+ in the route path, which makes it easier to provide certain guarantees for clients.
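A tiny sketch of that convention (the regex and route prefix are illustrative):

```python
# Sketch: the versioned-route convention the other NApps follow, so breaking
# API changes can ship under a new /v2 prefix without disrupting clients.
import re

VERSIONED = re.compile(r"^/api/kytos/maintenance/v[1-9]\d*(/|$)")

print(bool(VERSIONED.match("/api/kytos/maintenance/v1/")))  # True
print(bool(VERSIONED.match("/api/kytos/maintenance/")))     # False
```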
Original issue opened by @italovalcy at kytos#17.
Description: when the network operator needs to decommission a Switch, the orchestration tool should follow a sequence of steps to make sure no service will be affected
Expected deliverables:
The following procedure should be followed to decommission a switch:
a) For each NNI or UNI on the switch, apply the same process as for Decommission a Link
And...? What if it is an intra-switch service? If it is intra-switch, you go to step c of the link decommission (disable the service). The operator should have moved the intra-switch EVCs beforehand (again: how is this integrated with the Maintenance NApp?)
If kytos gets restarted, the items under maintenance should remain disabled, and should be re-enabled only after the maintenance window finishes. The way we are handling some items for now can lead to some of them regaining their normal state if kytos gets restarted during a MW.
This is related to kytos-ng/kytos#347
This NApp is also vulnerable to a "RuntimeError: dictionary changed size during iteration" across threads:
https://github.com/kytos-ng/maintenance/blob/master/main.py#L56
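A minimal sketch of the usual fix (snapshot the dict, ideally under a lock, before iterating); the names are illustrative, not the NApp's code:

```python
# Sketch: a dict mutated by another thread mid-iteration raises
# "RuntimeError: dictionary changed size during iteration"; iterating over
# a snapshot taken under a lock avoids it.
import threading

maintenances = {"mw1": "running", "mw2": "pending"}
lock = threading.Lock()

def list_statuses():
    with lock:
        snapshot = list(maintenances.items())  # copy while holding the lock
    return [f"{mw_id}={state}" for mw_id, state in snapshot]

print(list_statuses())
```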
When we can afford to, this NApp should start using MongoDB (issue #49).
Trying to capture the discussion we had in the PR #64 (see thread):
@Ktmi what's the expected behavior if a non active or non existing switch (interface or link) id is present in any of the lists? that might be outside of the scope of this PR, but it's something that we need to align either in the blueprint or a new issue. This can also lead to invalid data being inserted into the collection, which although is not desirable, it might be acceptable if we make clear that's not being validated in this iteration that will be delivered.
@Ktmi Ktmi on Dec 8, 2022
So, I've been thinking about this one, and I think that we should allow ids for things we can't confirm to exist. My main reasoning is that when we create a maintenance window, we won't be 100% certain of the state of the network's topology. It could be that switches were removed or added, making some ids that were previously invalid into valid ones, or valid into invalid.
@viniarck viniarck on Dec 8, 2022
Right @Ktmi, I believe this would make sense for this iteration. It's probably something to be refined in the blueprint, just so network operators who will use in production are also aware of this, and then in the future we can keep evolving and even try to integrate more with topology as new requirements are discussed and aligned. Thanks.
@italovalcy italovalcy 19 hours ago
@viniarck and @Ktmi, the way I see it: we could have a parameter so that the operator informs Kytos that he/she is requesting a nonexistent item on purpose, and then the validation should be ignored. Something like force or skip_check_exists, etc.
All items sent are being treated as switches.
Original issue opened by @italovalcy at kytos#23.
Hi,
The maintenance on a switch or backbone interface (NNI) should move the dynamic EVCs to another path; however, the EVCs remain on the same path as before the MW.
Steps to reproduce:
curl -s -X POST -H 'Content-type: application/json' http://localhost:8181/api/kytos/mef_eline/v2/evc -d '{"uni_z": {"tag": {"value": 300, "tag_type": 1}, "interface_id": "00:00:00:00:00:00:00:02:1"}, "uni_a": {"tag": {"value": 300, "tag_type": 1}, "interface_id": "00:00:00:00:00:00:00:01:1"}, "enabled": true, "name": "Vlan300_Test_evc1", "dynamic_backup_path": true}'
curl -s http://localhost:8181/api/kytos/mef_eline/v2/evc | jq -r '.[].current_path' | grep 00:00:00:00:00:00:00:02:2
curl -s -X POST -H 'Content-type: application/json' http://localhost:8181/api/kytos/maintenance -d '{"description": "my MW on switch 2", "start": "2021-03-05T22:19:00+0000", "end": "2021-03-05T22:20:00+0000", "items": ["00:00:00:00:00:00:00:02:2"]}'
Mar 5 22:19:00 10ae9a814839 apscheduler.executors.default:INFO base:123: Running job "MaintenanceWindow.start_mw (trigger: date[2021-03-05 22:19:00 UTC], next run at: 2021-03-05 22:19:00 UTC)" (scheduled at 2021-03-05 22:19:00+00:00)
Mar 5 22:19:00 10ae9a814839 apscheduler.executors.default:INFO base:144: Job "MaintenanceWindow.start_mw (trigger: date[2021-03-05 22:19:00 UTC], next run at: 2021-03-05 22:19:00 UTC)" executed successfully
Mar 5 22:19:00 10ae9a814839 apscheduler.scheduler:INFO base:632: Removed job 74c17ba02eaf4742ba898769f89e04a1-start
Expected result: the EVC should be moved from the path sw1--sw2 to sw1--sw3--sw2
Actual behavior: the EVC remains in the path sw1--sw2:
$ curl -s http://localhost:8181/api/kytos/mef_eline/v2/evc | jq -r '.[].current_path' | grep 00:00:00:00:00:00:00:02:2
"id": "00:00:00:00:00:00:00:02:2",
mininet> sh for s in s1 s2 s3; do echo ====== $s; ovs-ofctl dump-flows $s; done
====== s1
cookie=0xf9a50b5e84724c53, duration=2210.578s, table=0, n_packets=6627, n_bytes=675182, in_port="s1-eth1",dl_vlan=300 actions=mod_vlan_vid:300,mod_vlan_vid:1082,output:"s1-eth3"
cookie=0xf9a50b5e84724c53, duration=2210.550s, table=0, n_packets=6627, n_bytes=701690, in_port="s1-eth3",dl_vlan=1082 actions=strip_vlan,output:"s1-eth1"
cookie=0x0, duration=2213.127s, table=0, n_packets=1448, n_bytes=60816, priority=1000,dl_vlan=3799,dl_type=0x88cc actions=CONTROLLER:65535
====== s2
cookie=0xf9a50b5e84724c53, duration=2210.561s, table=0, n_packets=6627, n_bytes=675182, in_port="s2-eth1",dl_vlan=300 actions=mod_vlan_vid:300,mod_vlan_vid:1082,output:"s2-eth2"
cookie=0xf9a50b5e84724c53, duration=2210.559s, table=0, n_packets=6627, n_bytes=701690, in_port="s2-eth2",dl_vlan=1082 actions=strip_vlan,output:"s2-eth1"
cookie=0x0, duration=2213.146s, table=0, n_packets=1448, n_bytes=60816, priority=1000,dl_vlan=3799,dl_type=0x88cc actions=CONTROLLER:65535
====== s3
cookie=0x0, duration=2213.165s, table=0, n_packets=1448, n_bytes=60816, priority=1000,dl_vlan=3799,dl_type=0x88cc actions=CONTROLLER:65535
Hi,
Trying to capture the discussion on PR #64:
@viniarck viniarck 3 weeks ago
@Ktmi, for these cases flask provides jsonify (which we also use in other NApps in the responses instead of building and hooking into response_class); I think the intention would be clearer and cleaner with it:
maintenances = self.scheduler.list_maintenances().dict()["__root__"]
return jsonify(maintenances), 200
Could you adapt this? Same comment for /v1/<mw_id> below.
@Ktmi Ktmi 3 weeks ago
I used pydantic's json serialization specifically to match the TIME_FMT we are using. If we remove the requirement to match TIME_FMT, then we can perform marshalling with flask. This should also fix the issue with submitting times that don't include tz info.
@viniarck viniarck 3 weeks ago
I see. So, this won't be a blocker for this PR, since the TIME_FMT with %z was already in place historically. But since we're stabilizing the v1 of this NApp, I'll encourage you to continue this discussion: supporting timezones other than UTC with the %z offset fmt can really complicate things, making it harder to maintain and also to correlate events (good thing that in logs we'll have UTC on each log entry). Maybe we can nuke pytz and only support UTC in the v1 of this NApp? It would be great to confirm and have a discussion with network operators. In the old blueprint draft (that will be refined) on issue #59 we also didn't have any requirement to support arbitrary timezones.
We've been using UTC everywhere on NApps and core (core also natively supports encoding datetime with the "%Y-%m-%dT%H:%M:%S" format).
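For illustration, normalizing everything to UTC with that core format could look like this (a sketch, assuming timezone-aware datetimes on input; to_utc_string is a hypothetical helper):

```python
# Sketch: accept any aware datetime but store/emit UTC-only strings in the
# core's "%Y-%m-%dT%H:%M:%S" format, sidestepping per-request %z offsets.
from datetime import datetime, timezone

TIME_FMT = "%Y-%m-%dT%H:%M:%S"

def to_utc_string(dt: datetime) -> str:
    """Convert an aware datetime to its UTC representation in TIME_FMT."""
    return dt.astimezone(timezone.utc).strftime(TIME_FMT)

print(to_utc_string(datetime(2022, 3, 6, 17, 37, 1, tzinfo=timezone.utc)))
```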
- k-info-panel component to list maintenance windows from the GET /maintenance endpoint
- the status value should be shown with both its numeric and string value: 0 - pending, 1 - running, 2 - finished
- id, description, start, end, and items
- mef_eline UI has implemented similar components, so it turns out it was easier to get a sample to illustrate the layout of the components as well

When topology links move to kytos core (kytos-ng/kytos#92), this NApp shouldn't be directly accessing self.controller.napps[("kytos", "topology")]:
https://github.com/kytos-ng/maintenance/blob/master/main.py#L232
More NApp-to-NApp access will be allowed via a NApp instance ref, but that is still to be specified.
maintenance, following the register_status_func hooks pattern, should update the status accordingly when an entity enters maintenance, in addition to keeping the existing events being sent. That way the status attribute is correctly detected as EntityStatus.DOWN, playing well with other features and solving kytos-ng/topology#62; consequently, all of the disable/deactivation operations on topology are supposed to be removed too, and only notifications should remain there.
Original issue opened by @ArturoQuintana at kytos#43.
Example:
# Sets up a wrong maintenance window data
payload = {
    "description": "my MW on switch 2",
    "start": start.strftime(TIME_FMT),
    "end": end.strftime(TIME_FMT),
    "items": []
}
reflected on end-to-end test: tests/test_e2e_15_maintenance.py::TestE2EMaintenance::test_024_create_mw_on_switch_should_fail_items_empty
In the case where the payload does not have an items field, it is returning 500 when it should be 400.
Example:
# Sets up a wrong maintenance window data
payload = {
    "description": "my MW on switch 2",
    "start": start.strftime(TIME_FMT),
    "end": end.strftime(TIME_FMT)
}
reflected on end-to-end test:
tests/test_e2e_15_maintenance.py::TestE2EMaintenance::test_026_create_mw_on_switch_should_fail_no_items_field_on_payload
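The validation that would turn these 500s into 400s could be as simple as the following sketch (the function name is illustrative, not the NApp's API):

```python
# Sketch: payload checks on the `items` field so malformed requests get a
# 400 instead of crashing with a 500.
def validate_items(payload) -> None:
    if not isinstance(payload, dict) or "items" not in payload:
        raise ValueError("missing required field: items")  # -> HTTP 400
    if not payload["items"]:
        raise ValueError("items must not be empty")        # -> HTTP 400
```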
Example:
# Sets up a wrong maintenance window data
payload = {"a"}
reflected on end-to-end test:
tests/test_e2e_15_maintenance.py::TestE2EMaintenance::test_029_create_mw_on_switch_should_fail_wrong_payload
Same as kytos-ng/mef_eline#185 for a future opportunity
In the initial iteration, this NApp is still storing the maintenance windows in a Python dict; however, to be production grade and provide permanent storage, it should implement NoSQL storage (for more context, in the kytos-ng 2022.2 release topology will be migrated, and other NApps will follow suit later, so maintenance is also a potential candidate depending on the priorities for 2022.3).
Original issue opened by @ArturoQuintana at kytos#39.
@ajoaoff @rmotitsuki @italovalcy @hdiogenes
During a maintenance, new problems can arise that were not contemplated as part of the process. These cases usually extend the maintenance window. Therefore, it would be useful to have the capability of extending running maintenances.
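A sketch of what such an extension operation could check (extend_window is a hypothetical helper, not the NApp's API):

```python
# Sketch: extend a running window by pushing `end` forward, rejecting
# extensions that would still leave the window ended in the past.
from datetime import datetime, timedelta, timezone

def extend_window(window: dict, minutes: int) -> dict:
    """Push the window's aware `end` datetime forward by `minutes`."""
    new_end = window["end"] + timedelta(minutes=minutes)
    if new_end <= datetime.now(timezone.utc):
        raise ValueError("extended end is still in the past")
    window["end"] = new_end
    return window
```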